About
I build multi-agent systems using small language models (SLMs) for professional document analysis. My research focuses on reliability: using validation models to corroborate outcomes in multi-agent system (MAS) frameworks, and fine-tuning SLMs to handle domain-specific tasks that require both accuracy and verifiable decision-making.
The core challenge I'm tackling: how do you make AI systems reliable enough for high-stakes professional work? My approach combines specialized fine-tuned models with verification architectures. When one model makes a classification or extracts information, another validates it. When decisions matter, multiple models reach consensus. This isn't about replacing human judgment; it's about building systems that know when they need human oversight.
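Here's a minimal sketch of that classify-then-validate loop. Both "models" are stubbed with plain functions, and every name, label, and threshold is an illustrative placeholder rather than the actual system:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str
    confidence: float

def primary_classify(document: str) -> Verdict:
    # Stand-in for a fine-tuned SLM classifier.
    return Verdict(label="lease_agreement", confidence=0.91)

def validate(document: str, verdict: Verdict) -> bool:
    # Stand-in for a second model that independently checks the first verdict.
    return "lease" in document.lower()

def review(document: str, min_confidence: float = 0.85) -> str:
    # Accept only when the primary model is confident AND the validator
    # agrees; everything else is routed to a human reviewer.
    verdict = primary_classify(document)
    if verdict.confidence < min_confidence or not validate(document, verdict):
        return "escalate_to_human"
    return verdict.label

print(review("Residential lease between landlord and tenant ..."))
```

The design choice that matters here: disagreement or low confidence never produces a silent guess, only an explicit escalation.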
Before focusing on AI research, I worked in real estate and accounting, which gave me firsthand experience with the kind of semi-structured professional tasks that sit between simple automation and creative problem-solving. That background now informs how I design systems that need to be both capable and trustworthy.
Technical Background
Core: Python, SQL (PostgreSQL, MySQL), R, Machine Learning, Natural Language Processing
ML/AI: TensorFlow, Scikit-learn, LLMs, Multi-agent Systems, Fine-tuning, OCR (Tesseract)
Data Science: Pandas, NumPy, Statistical Modeling, Jupyter, Matplotlib, Seaborn
Education
MSc Artificial Intelligence and Data Science (In Progress, Distinction expected)
University of Hull, 2025–2026
Thesis: "Development and assessment of contract review outcomes from a multi-agent system using fine-tuned small language models"
BSc Accounting
University of South Florida, 2013–2016
Current Research
Patent Pending: Autonomous Real Estate Brokerage System
Dual-LLM architecture for automated document classification and compliant response generation. Multi-agent consensus mechanism for high-stakes decisions, with ongoing comparative analysis of AI and human performance.
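As a rough illustration of how such a consensus step might look (the stubbed agents, labels, and two-thirds quorum below are hypothetical, not the patented design):

```python
from collections import Counter

def consensus(votes: list[str], quorum: float = 2 / 3) -> str | None:
    """Return the winning label if it meets quorum, else None (escalate)."""
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= quorum else None

# Three stubbed agents voting on a document classification:
votes = ["purchase_offer", "purchase_offer", "counter_offer"]
print(consensus(votes) or "no consensus: route to human review")
```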
Master's Thesis
Development and assessment of contract review outcomes from a multi-agent system using fine-tuned small language models. Investigating the performance of SLM-driven contract review systems that use corroborating verification models, with emphasis on accuracy, efficiency, and compliance in legal documentation.
My research investigates whether fine-tuned small language models can perform contract review tasks with accuracy and efficiency comparable to human experts. The work focuses on three key challenges:
- Building reliable multi-agent systems that properly hand off context between specialized models
- Determining when automated systems should escalate to human review
- Measuring performance in ways that matter for production deployment: not just accuracy, but false positive rates, edge case handling, and compliance risk (a toy example follows this list)
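A toy example of the metric bookkeeping the last point refers to; the predictions are fabricated solely to show the arithmetic:

```python
# Toy predictions: 1 = "flag this clause", 0 = "pass". Data is made up
# purely to demonstrate the calculation, not real evaluation results.
y_true = [1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]

fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

# False positives drive reviewer workload; false negatives are compliance
# risk, so each is tracked separately instead of folding both into accuracy.
fpr = fp / (fp + tn)
print(f"false positive rate: {fpr:.2f}, missed flags: {fn}")
```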
This research combines supervised fine-tuning of domain-specific language models with verification mechanisms that catch errors before they reach production. The goal is systems that augment rather than replace professional judgment.