Projects

jido

Python, PyTorch, NVML, ROCm

ML systems toolkit that detects CPU and GPU capabilities across three vendors, discovers installed inference backends, and runs standardized kernel benchmarks with reproducible, cross-machine result schemas.

  • Multi-vendor GPU detection (NVIDIA, AMD, Intel) with deterministic machine fingerprinting
  • Runtime discovery of seven inference backends (PyTorch, ONNX Runtime, vLLM, TensorRT, llama.cpp, OpenVINO, DeepSpeed)
  • Benchmark runner for matmul, attention, and conv2d with timing percentiles, memory tracking, and correctness validation
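The backend-discovery step can be illustrated with a short sketch: probe the environment for each candidate backend's import name. The module-name mapping and function name below are illustrative assumptions, not jido's actual API.

```python
import importlib.util

# Candidate backends mapped to the module name whose presence signals them.
# These module names are assumptions chosen for illustration.
BACKENDS = {
    "PyTorch": "torch",
    "ONNX Runtime": "onnxruntime",
    "vLLM": "vllm",
    "TensorRT": "tensorrt",
    "llama.cpp": "llama_cpp",
    "OpenVINO": "openvino",
    "DeepSpeed": "deepspeed",
}

def discover_backends(candidates=BACKENDS):
    """Return the subset of candidate backends importable in this environment."""
    return {
        name: module
        for name, module in candidates.items()
        if importlib.util.find_spec(module) is not None
    }
```

Probing with `find_spec` avoids actually importing each backend, which keeps discovery fast and side-effect free.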

Soft Axiom

Python, FastAPI, Next.js, PostgreSQL, Vercel, AWS

Full-stack AI product exploration that uses RAG and cloud infrastructure to study real engineering decisions: framing better questions, choosing the right problems to solve with AI, and shipping a secure MVP at the right time.

  • End-to-end stack across Next.js/Vercel, FastAPI/AWS App Runner, AWS RDS (pgvector), and AWS Cognito
  • AI-assisted debugging workflow centered on framing the right question, isolating root causes, and shipping fixes quickly
  • Continuous price/performance evaluation (Vercel vs AWS), MVP scoping, launch timing, and privacy/security hardening

soft-axiom-data

Python, PyTorch, Hugging Face, LoRA, PEFT

Data collection and curation pipeline for fine-tuning a domain-adapted language model that powers Soft Axiom, replacing generic API inference with a task-specific model.

  • Synthetic and real data sourcing from ingested documents, RAG interactions, and public datasets
  • Deduplication, quality scoring, and provenance tracking for training-set integrity
  • Parameter-efficient fine-tuning with LoRA via Hugging Face PEFT
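The deduplication-with-provenance step could look roughly like this sketch; the record schema (`text` and `source` fields) and the choice of whitespace-normalized SHA-256 hashing are assumptions, not the pipeline's actual design.

```python
import hashlib

def dedupe(records):
    """Drop exact-duplicate texts, keeping the first occurrence and its provenance.

    Each record is a dict with at least 'text' and 'source' keys (assumed schema).
    Returns (unique_records, dropped_count).
    """
    seen = set()
    unique = []
    dropped = 0
    for rec in records:
        # Normalize whitespace before hashing so trivially different copies collide.
        key = hashlib.sha256(" ".join(rec["text"].split()).encode()).hexdigest()
        if key in seen:
            dropped += 1
            continue
        seen.add(key)
        unique.append(rec)
    return unique, dropped
```

Keeping the first occurrence preserves the earliest provenance record for each text, which matters when tracing a training example back to its source.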

vla-world-model-control

Python, PyTorch, Isaac Sim, Isaac Lab

Independent study at The Cooper Union under Prof. Carl Sable exploring vision-language-action (VLA) and world-model integration for long-horizon robotic control, benchmarked in NVIDIA Isaac Sim.

  • Simulation stack on NVIDIA RTX 5060 Ti 16 GB with Isaac Sim, Isaac Lab, and Isaac Lab Arena
  • Baseline VLA evaluations across standardized manipulation and navigation benchmarks
  • World-model augmentation study targeting improved multi-step task completion
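One way to report the multi-step completion metric the study targets is success rate per task horizon across evaluation episodes. The sketch below is a generic aggregator under that assumption, not any benchmark's official scoring code.

```python
def success_rates_by_horizon(episodes):
    """Aggregate episode outcomes into success rate per task horizon.

    episodes: iterable of (horizon_steps, succeeded) pairs, e.g. (5, True).
    Returns {horizon: fraction_of_episodes_succeeded}.
    """
    totals = {}
    wins = {}
    for horizon, ok in episodes:
        totals[horizon] = totals.get(horizon, 0) + 1
        wins[horizon] = wins.get(horizon, 0) + int(ok)
    return {h: wins[h] / totals[h] for h in totals}
```

Breaking results out by horizon makes it visible whether a world-model-augmented policy degrades more slowly than the baseline as tasks get longer.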

vla-video-data

Python, PyTorch, FFmpeg, OpenCV, Hugging Face

Video data collection and processing pipeline for training and fine-tuning vision-language-action models, feeding the vla-world-model-control project with curated datasets.

  • Multi-source ingestion from Isaac Sim recordings, public robotics datasets (DROID, Open X-Embodiment), and demonstrations
  • Frame extraction, resolution normalization, and frame-action alignment pipeline
  • Dataset curation with diversity balancing, duplicate removal, and quality filtering
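Frame-action alignment reduces to matching each frame timestamp with the nearest recorded action timestamp within a tolerance. The sketch below assumes sorted timestamp lists and an illustrative 50 ms tolerance; it is not the pipeline's actual alignment code.

```python
import bisect

def align_frames_to_actions(frame_ts, action_ts, tolerance=0.05):
    """Pair each frame timestamp with the index of the nearest action timestamp.

    frame_ts, action_ts: sorted lists of timestamps in seconds.
    Frames with no action within `tolerance` seconds map to None.
    """
    pairs = []
    for t in frame_ts:
        i = bisect.bisect_left(action_ts, t)
        # Only the neighbors on either side of the insertion point can be nearest.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(action_ts)),
            key=lambda j: abs(action_ts[j] - t),
            default=None,
        )
        if best is not None and abs(action_ts[best] - t) <= tolerance:
            pairs.append((t, best))
        else:
            pairs.append((t, None))
    return pairs
```

Using binary search keeps alignment near-linear in the number of frames, and the explicit `None` for unmatched frames lets a later filtering pass drop them rather than silently pairing them with stale actions.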