AI engineered for precision, speed, and reliability.
Phase 1
Refine the query
To optimize the accuracy of the answers an LLM generates, the queries it receives must first be refined for comprehension. The clearer and more understandable the query, the better the output.
Specification
- 1.1 Check for safety and relevance
- 1.2 Optimize query comprehension
- 1.3 Check for Workflows automation
- 1.4 Check for Custom Answer
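The four steps above can be sketched as a simple pre-processing pipeline. This is an illustrative outline only, assuming keyword-based safety and routing checks; all function names, triggers, and routing rules here are hypothetical, not Intercom's actual implementation.

```python
# Hypothetical sketch of Phase 1: refine a query before it reaches the LLM.
# All names and rules are illustrative assumptions, not Intercom's API.

BLOCKED_TERMS = {"jailbreak", "ignore previous instructions"}  # assumed blocklist

def is_safe_and_relevant(query: str) -> bool:
    """Step 1.1: reject unsafe or manipulative input up front."""
    lowered = query.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def optimize_query(query: str) -> str:
    """Step 1.2: normalize whitespace so the LLM sees a clean query."""
    return " ".join(query.split())

def route_query(query: str, workflows: dict, custom_answers: dict):
    """Steps 1.3-1.4: check for a matching Workflow or Custom Answer
    before falling through to LLM generation."""
    lowered = query.lower()
    for trigger, workflow in workflows.items():
        if trigger in lowered:
            return ("workflow", workflow)
    for trigger, answer in custom_answers.items():
        if trigger in lowered:
            return ("custom_answer", answer)
    return ("llm", optimize_query(query))
```

In this sketch, only queries that pass the safety check and match neither a Workflow nor a Custom Answer proceed to the generation phase.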
Phase 2
Generate a response
Once a query has been checked and optimized, the next stage is to generate a response using the LLM. For this task, the Fin AI Engine™ uses a bespoke, enhanced retrieval-augmented generation (RAG) architecture.
Specification
- 2.1 Optimize retrieval
- 2.2 Integrate and augment
- 2.3 Generate response
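The retrieve-augment-generate loop above can be illustrated with a minimal sketch. The keyword-overlap retriever, prompt template, and `generate` stub are all assumptions for demonstration; a production RAG system would use embedding-based retrieval and a real LLM call.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) loop.
# Retriever, prompt format, and generate() stub are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str
    text: str

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Step 2.1: rank passages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(terms & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list) -> str:
    """Step 2.2: augment the query with the retrieved context."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Step 2.3: stand-in for the actual LLM call."""
    return f"(LLM answer grounded in {prompt.count('[')} passages)"
```

Grounding the prompt in retrieved passages, rather than relying on the model's parametric knowledge alone, is what lets a RAG system answer from a specific knowledge base.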
Phase 3
Validate accuracy
In the final step of the process, the Fin AI Engine™ performs checks to determine whether the output from the LLM meets the necessary response accuracy and safety standards.
Specification
- 3.1 Validate the response
- 3.2 Respond to customer
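A validation gate like the one described above can be sketched as a pair of checks with a safe fallback. The grounding heuristic, threshold, and fallback message are hypothetical assumptions, not the engine's actual criteria.

```python
# Hypothetical sketch of Phase 3: validate an answer before replying.
# The overlap heuristic, 0.5 threshold, and fallback text are assumptions.

def is_grounded(answer: str, sources: list) -> bool:
    """Step 3.1a: require the answer to overlap with retrieved source text."""
    answer_terms = set(answer.lower().split())
    source_terms = set(" ".join(sources).lower().split())
    return len(answer_terms & source_terms) / max(len(answer_terms), 1) >= 0.5

def is_safe(answer: str) -> bool:
    """Step 3.1b: block responses containing disallowed content (toy check)."""
    return "ssn" not in answer.lower()

def respond(answer: str, sources: list) -> str:
    """Step 3.2: send the answer only if every check passes, else fall back."""
    if is_grounded(answer, sources) and is_safe(answer):
        return answer
    return "I'm not confident in an answer; routing you to a teammate."
```

The key design point is that a validation failure never reaches the customer: the gate degrades gracefully to a handoff instead of risking an inaccurate or unsafe reply.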
always-on
Engine optimization
To calibrate and enhance engine performance, the Fin AI Engine™ has advanced integrated tools that help optimize answer generation, efficiency, precision, and coverage.
Specification
- 4.1 Fin customization and control
- 4.2 AI analytics and reporting
- 4.3 AI recommendations
safeguarding
AI Trust & Security
Intercom has implemented state-of-the-art security measures to protect Fin against a wide range of LLM threats, including those identified in the OWASP Top 10 for LLM Applications. By continuously testing a variety of leading LLMs and deploying rigorous internal controls, security protocols, and safeguards, Fin achieves a high level of security and reliability while mitigating potential limitations and threats.
Specification
- 5.1 Fin AI Security
- 5.2 Regional hosting
- 5.3 Compliance: International Standards
- 5.4 Third-party AI Providers