Practical research at the intersection of AI safety, security, and content authenticity.
We build things that work, then share what we learn.
Research Focus
AI Safety & Alignment
We focus on the interface layer—how humans communicate intent to AI systems and how those systems can better preserve that intent. This is where misalignment often starts.
Human-AI communication patterns
Intent preservation across interactions
Prompt injection and adversarial robustness
Safe defaults for children and vulnerable users
AI-Powered Security
Developing intelligent security systems that leverage machine learning for threat detection, monitoring, and response.
Intelligent surveillance systems
Automated threat detection
Physical-digital security integration
Content Verification
Building tools and techniques to authenticate digital content and detect AI-generated or manipulated media.
AI content detection
Digital provenance
Authentication systems
Active Projects
Child AI Safety
Active
Developing protective AI systems designed specifically for children's interactions with AI technology.
Status: In Development
Prompt Injection Detection
Active
Research into detecting and preventing adversarial attacks on language models through prompt manipulation.
Status: Ongoing Research
AI Fingerprinting
Active
Techniques for identifying and verifying the source and authenticity of AI-generated content.
Status: Collaborative Project
Interested in Our Research?
We're always open to discussing our work and exploring collaboration opportunities.