AI Engineering Lead

Overview

At Aden, artificial intelligence is not just a feature; it is the core driver of our infrastructure. We are seeking a deeply technical, highly analytical Applied AI Engineer to join our core engineering team. In this role, you will be responsible for bridging the gap between cutting-edge AI research and production-grade enterprise software. You will design, build, and deploy the intelligent systems that power our infrastructure, focusing on large language models (LLMs), autonomous agent frameworks, and advanced data pipelines. You will tackle some of the most complex challenges in the industry: ensuring consistent, highly accurate, and secure outputs from probabilistic models within a strictly regulated enterprise environment. If you are passionate about building robust AI systems that can independently parse complex data, reason through multi-step transactional logic, and execute actions flawlessly, this is the role for you.

Responsibilities

  • Model Deployment & Optimization: Design, deploy, and maintain state-of-the-art foundation models and custom-trained AI systems within a high-throughput, low-latency B2B enterprise environment. You will optimize inference performance and manage resource allocation to ensure cost-effective scaling.
  • Agentic Framework Development: Architect and implement robust, multi-agent workflows capable of handling complex operations, from risk assessment and fraud detection to automated reconciliation and dynamic routing.
  • RAG & Data Pipelines: Build and optimize Retrieval-Augmented Generation (RAG) pipelines that ingest, vectorize, and retrieve massive volumes of structured and unstructured data (e.g., market feeds, SEC filings, ledger data) in real time.
  • Evaluation & Alignment: Develop rigorous, automated testing frameworks to evaluate model performance, specifically focusing on hallucination reduction, mathematical accuracy, and strict adherence to compliance standards.
  • Infrastructure Integration: Work closely with backend engineers to integrate AI services seamlessly into our core Rust/Go/Python-based APIs, ensuring fault tolerance and idempotency.
  • Continuous Improvement: Stay at the bleeding edge of AI research, continuously evaluating new models, open-source agent development frameworks, and optimization techniques to keep Aden’s infrastructure ahead of the curve.

Qualifications

  • Experience: 4+ years of software engineering experience, with at least 2 years dedicated to deploying machine learning models, LLMs, or complex AI systems in production environments.
  • Technical Stack: Expert-level proficiency in Python. Deep understanding of ML frameworks (PyTorch, TensorFlow) and LLM orchestration tools (LangChain, LlamaIndex, or proprietary/custom frameworks).
  • System Design: Strong background in distributed systems, vector databases (Pinecone, Weaviate, Milvus), and cloud infrastructure (AWS/GCP).
  • Domain Knowledge: Solid grasp of transformer architectures, fine-tuning methodologies (LoRA, QLoRA), and prompting strategies for complex reasoning tasks.
  • Mindset: A relentless focus on security, accuracy, and edge-case handling, which is critical when building AI for enterprise applications.

Nice to Haves

  • Previous experience at large-scale, data-intensive companies.
  • Active contributions to open-source AI projects or agent development frameworks.
  • Proficiency in systems-level languages (e.g., Rust or Go) for optimizing performance-critical paths.

What We Offer

  • Competitive salary and substantial equity package.
  • Comprehensive health, dental, and vision insurance.
  • The opportunity to build the underlying architecture for the AI economy from the ground up.
  • Flexible, remote-friendly work culture with regular team offsites.

Tech Stack

  • Core Languages: Python (3.11+), TypeScript, SQL, Bash/PowerShell for cross-platform scripting
  • AI Tooling & Environments: Claude Code, Codex CLI, Cursor, Opencode, Antigravity IDE
  • Foundation Models: OpenAI (GPT-4o/o1), Anthropic (Claude 3.5 Sonnet/Opus), Gemini, and open-weight models
  • Architecture: Model Context Protocol (MCP), LiteLLM, uv (for Python dependency and workspace management)
  • Cloud Platforms: GCP/AWS
  • Containerization: Docker, Kubernetes
  • Version Control: Git
  • Data Stores & Messaging: PostgreSQL, MongoDB, Redis, Kafka