AODocs is a software company founded in 2012 that builds a Content Services SaaS platform. Gartner (in its Magic Quadrant) and Forrester (in its Wave) recognize it as one of the top and most innovative players in the content services space, in a market populated by legacy on-premises competitors such as IBM FileNet, Documentum, and OpenText that are approaching end of life. Many customers of these legacy products are looking to replace them with a modern SaaS solution like AODocs.
Our product is used by large organizations such as Google, Veolia, and Colgate to control their critical documents, protecting them against costly human errors while accelerating key business processes. Our clients’ main use cases are document control for large engineering projects, standard operating procedures, quality management, consulting and audit reports, and more generally, all business processes involving important documents in professional services, healthcare, HR, procurement, legal, and more.
The generative AI wave represents an important opportunity for AODocs, as more and more companies realize that AI assistants cannot work on messy information. AODocs’ ability to control documents and their versions can be used to ensure AI assistants only work on the right, validated content, positioning AODocs to benefit from the AI market’s traction.
Our team of 130 is composed of highly motivated and competent people. We believe that good ideas can come from anyone, regardless of their formal job role.
The position
Do you like having creative freedom, where your ideas can be easily discussed and implemented in a small and dynamic company? How about having a high impact on a product with millions of users? Do you like the possibilities offered by new cloud technologies, especially serverless? You’ve knocked on the right door.
We are looking for a highly motivated Staff Engineer to join our growing team! The Staff Engineer contributes to the design and evolution of core parts of our platform and helps drive the overall technical direction of the system.
We expect you to help us take our products and our team to the next technical level and to teach us something we don’t know.
We’re a transparent organization. Important metrics and numbers are communicated to all team members. Decisions are discussed collaboratively, not behind closed doors. If you value being part of the discussions that shape the future of the product, giving your input and being heard, then you might just be happy to work with us.
You will work closely with the Product and Frontend teams, sometimes in squads, and with ad hoc teams meant to quickly address specific matters.
Our infrastructure runs entirely on Google Cloud Platform. We use Firebase, App Engine, Cloud Run, Cloud Functions, and Pub/Sub. We code mainly in Java, with some JavaScript and Go.
Mission
As a Staff Engineer, you will help shape and build the next generation of intelligent document processing at AODocs. You will design and evolve core capabilities such as agentic RAG pipelines, semantic extraction, and a unified search and LLM layer operating at scale across enterprise document corpora.
Your mission is to bring AI into a production-grade platform, with a strong focus on reliability, scalability, and sound engineering decisions.
Core Responsibilities
- Provide technical leadership for the design and deployment of AI capabilities across the platform
- Design and implement AI-native components: agentic RAG pipelines, intelligent document processing (OCR, classification, extraction), and hybrid dense/sparse search
- Own substantial features end-to-end: from technical spec to production
- Drive the integration of LLM capabilities into the existing document platform architecture
- Design and evolve microservices with clean hexagonal boundaries
- Maintain a high engineering bar for performance, reliability, and system quality
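To make "clean hexagonal boundaries" concrete, here is a minimal ports-and-adapters sketch. The `DocumentStore` port and `InMemoryStore` adapter are hypothetical names for illustration, not actual AODocs classes: the domain code depends only on the port, so swapping a cloud storage backend means writing a new adapter, not touching domain logic.

```java
import java.util.*;

public class Hexagon {
    // Port: the domain's contract, free of any cloud or vendor types.
    interface DocumentStore {
        void save(String id, String content);
        Optional<String> load(String id);
    }

    // Adapter: one concrete implementation behind the port. A GCS-backed
    // adapter would implement the same interface without changing callers.
    static class InMemoryStore implements DocumentStore {
        private final Map<String, String> docs = new HashMap<>();
        public void save(String id, String content) { docs.put(id, content); }
        public Optional<String> load(String id) { return Optional.ofNullable(docs.get(id)); }
    }

    // Domain service: depends only on the port, never on an adapter.
    static String publish(DocumentStore store, String id, String content) {
        store.save(id, content);
        return store.load(id).orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(publish(new InMemoryStore(), "sop-1", "Standard Operating Procedure v2"));
    }
}
```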
What you’ll work on
- You will work on systems indexing and reasoning over large enterprise document corpora where governance, traceability, and reliability are first-class constraints.
- You will help define the architecture that enables AI capabilities to operate reliably within enterprise-grade document governance systems.
- Agentic RAG: build retrieval pipelines that go beyond naive chunking, with query planning, re-ranking, and multi-hop reasoning over large document corpora
- Intelligent Document Processing: OCR, layout understanding, entity extraction, document classification at scale
- Search: merge semantic vector search with traditional full-text into a unified, latency-aware retrieval layer
- Full-stack delivery: own features across the stack, with Java or Python backends, Angular frontends, and clean API contracts in between
- Platform: we run on GCP today and are actively moving toward a cloud-agnostic architecture: you’ll design and operate microservices that work across providers, with Cloud Run, Pub/Sub, and GCS as the current baseline
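As one illustration of the "unified, latency-aware retrieval layer" above, a common way to merge a dense (vector) result list with a sparse (full-text) one is Reciprocal Rank Fusion, which needs only the two rankings, not comparable scores. This is a generic sketch under that assumption, not AODocs' actual implementation:

```java
import java.util.*;

public class HybridSearch {
    static final int K = 60; // standard RRF damping constant

    // Each input list is ordered best-first; elements are document IDs.
    // A document's fused score is the sum of 1/(K + rank) over the lists it appears in.
    public static List<String> fuse(List<String> dense, List<String> sparse) {
        Map<String, Double> score = new HashMap<>();
        addRanks(score, dense);
        addRanks(score, sparse);
        List<String> merged = new ArrayList<>(score.keySet());
        merged.sort((a, b) -> Double.compare(score.get(b), score.get(a)));
        return merged;
    }

    private static void addRanks(Map<String, Double> score, List<String> ranked) {
        for (int rank = 0; rank < ranked.size(); rank++) {
            score.merge(ranked.get(rank), 1.0 / (K + rank + 1), Double::sum);
        }
    }

    public static void main(String[] args) {
        List<String> dense = List.of("doc-A", "doc-B", "doc-C");
        List<String> sparse = List.of("doc-B", "doc-D", "doc-A");
        // Documents found by both retrievers (doc-A, doc-B) rise above single-list hits.
        System.out.println(fuse(dense, sparse));
    }
}
```

The appeal of RRF here is latency-awareness: both retrievers can run in parallel and the fusion step is a cheap in-memory merge.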
What we’ve shipped recently
- Agentic RAG with a multi-agent architecture: orchestration, tool use, and retrieval working together across large document corpora
- Intelligent Document Processing pipeline: OCR, metadata extraction, and automated document splitting at scale
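To give a flavor of what "orchestration, tool use, and retrieval working together" means, here is a deliberately toy agent loop: a planner decomposes a query into sub-queries, a retrieval "tool" answers each, and the orchestrator collects the results. All names are illustrative; the real multi-agent architecture is far richer than this sketch.

```java
import java.util.*;
import java.util.function.*;

public class AgentLoop {
    // Tool: any capability the orchestrator can invoke (retrieval, extraction, ...).
    interface Tool extends Function<String, String> {}

    // Toy planner: splits a compound question into per-topic sub-queries.
    // A real planner would be LLM-driven and could plan multi-hop chains.
    static List<String> plan(String query) {
        return Arrays.asList(query.split(" and "));
    }

    // Orchestrator: runs the plan step by step, collecting tool outputs.
    static List<String> run(String query, Tool retriever) {
        List<String> answers = new ArrayList<>();
        for (String subQuery : plan(query)) {
            answers.add(retriever.apply(subQuery.trim()));
        }
        return answers;
    }

    public static void main(String[] args) {
        Tool retriever = q -> "top document for: " + q; // stand-in for real retrieval
        System.out.println(run("safety procedures and audit reports", retriever));
    }
}
```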
Profile
- 8+ years on SaaS products, with real production AI/ML systems in your track record
- Backend: fluent in both Java and Python, and you know when to use which
- Frontend: You are comfortable working across the stack when needed, but your primary focus is backend systems and platform architecture.
- Cloud: GCP-native today, cloud-agnostic mindset: you design for portability from the start
- Architecture: hexagonal architecture is not a buzzword to you: you’ve applied ports and adapters on a real codebase under real constraints
- Microservices: you’ve designed, deployed, and operated distributed services in production
- AI Systems: you understand how LLMs and embedding models work under the hood, not just how to call an API. You know their failure modes, context limitations, and cost/latency tradeoffs. You’ve integrated AI components (retrieval, generation, classification) into a production ecosystem and reasoned about the architecture as a whole: where AI belongs, where it doesn’t, and how to make the seams clean
- You understand that speed and reliability are product features, not engineering vanity
- Fully fluent in English: specs, design docs, and PRs written with native-level precision
Flexible full-remote policy from France, with a monthly visit to the Paris office