5 ways Citigroup is using AI [Case Study] [2026]
From the boardroom to the back office, Citigroup is proving that artificial intelligence is no longer a moon-shot but a practical lever for growth, resilience, and customer delight. This blog unpacks five concrete deployments—spanning generative knowledge assistants, real-time fraud analytics, AI-driven risk and compliance, employee productivity tools, and client-facing virtual agents—to show how a 200-year-old institution can stitch cutting-edge models into the fabric of a highly regulated global bank. We trace each initiative from the business pain point through implementation and governance to measurable outcomes, revealing lessons that resonate far beyond Wall Street: decide precisely where AI will move the needle, embed airtight controls from day one, and measure capacity created—not merely lines of code or chat messages. Whether you’re a compliance leader pondering automation or a CTO wrestling with tech debt, the stories ahead demonstrate that thoughtful adoption can reduce risk while expanding capability. Citi’s journey reminds us that innovation is less about flashy demos and more about sustained operational excellence. As regulatory pressure tightens and customer expectations soar, these case studies offer a timely roadmap for banking and beyond, illustrating that the future belongs to organizations that can pair creative data science with disciplined execution.
Related: Ways DoorLoop is Using AI [Case Study]
Case Study #1
Citi AI: Deploying Generative-AI at Scale with Citi Assist and Citi Stylus
Business Objective
- Pain-point: Analysts, operations teams and risk officers were losing hours each week hunting through thousands of pages of policies, procedures and legacy PDFs. Inconsistent answers to regulators and auditors added operational risk.
- Goal: “Make work easier and boost productivity for 140,000 colleagues,” as Tim Ryan, Head of Technology & Business Enablement, framed it at launch.
- Strategic lens: Citi’s management viewed generative AI as a horizontal enabler to sharpen regulatory readiness while releasing staff for advisory work that drives fee income.
Implementation
| Element | Details |
| --- | --- |
| Cloud architecture | Multi-year agreement with Google Cloud; LLMs run inside Citi’s private Vertex AI tenants with full encryption, role-based access and audit logging. |
| Products | Citi Assist performs retrieval-augmented generation over the bank’s policy library; Citi Stylus ingests long documents and returns bullet-point summaries, comparisons or draft emails. |
| Security & governance | Only vetted corpora are fed into the RAG pipeline; every answer cites a version-controlled source file, satisfying the consent-order controls imposed on Citi after 2020. |
| Roll-out cadence | December 2024 pilot in eight countries (US, Canada, Hungary, India, Ireland, Poland, Singapore, UK) → phased expansion to Hong Kong and other Asia-Pacific hubs by May 2025. |
| Change-management | “Red-team-then-ring-fence” approach: risk and compliance users onboarded first; daily office-hour clinics; in-product feedback tied to Jira for rapid prompt-engineering fixes. |
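The table describes Citi Assist as retrieval-augmented generation over a vetted policy corpus, with every answer citing a version-controlled source. Citi has not published the pipeline itself, but the basic pattern can be sketched as: retrieve the most relevant policy passages, then prompt the model to answer only from those passages and return their source identifiers. Below is a minimal sketch, with TF-IDF retrieval standing in for the embedding search a Vertex AI deployment would use and the LLM call left as an assembled prompt.

```python
# Minimal RAG-with-citations sketch. TF-IDF retrieval stands in for the
# embedding search a production system would use, and the LLM call is left
# as a prompt; this is illustrative, not Citi's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Vetted corpus only: each chunk carries a version-controlled source ID (invented here).
policy_chunks = [
    {"source": "KYC-Policy-v12#s3", "text": "Enhanced due diligence is required for politically exposed persons."},
    {"source": "AML-Standard-v7#s1", "text": "Transactions above the reporting threshold must be filed within 24 hours."},
    {"source": "Records-Policy-v4#s9", "text": "Client records must be retained for seven years after account closure."},
]

vectorizer = TfidfVectorizer().fit([c["text"] for c in policy_chunks])
chunk_matrix = vectorizer.transform([c["text"] for c in policy_chunks])

def answer_with_citations(question: str, top_k: int = 2) -> dict:
    """Retrieve the most relevant policy chunks and ground the answer in them."""
    scores = cosine_similarity(vectorizer.transform([question]), chunk_matrix)[0]
    best = sorted(range(len(policy_chunks)), key=lambda i: scores[i], reverse=True)[:top_k]
    context = [policy_chunks[i] for i in best]
    prompt = ("Answer ONLY from the passages below and cite their source IDs.\n\n"
              + "\n".join(f"[{c['source']}] {c['text']}" for c in context)
              + f"\n\nQuestion: {question}")
    return {"prompt": prompt, "citations": [c["source"] for c in context]}

print(answer_with_citations("How long do we keep client records?")["citations"])
```

Grounding the answer in retrieved, versioned chunks is what makes each response auditable: the citation list, not the model's free text, is what compliance reviewers check against.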
Results (First Six Months)
- Productivity gains – Typical policy queries now resolve in seconds vs. 3-8 minutes previously; document comparison that took analysts half a day is completed “in one pass,” according to CTO David Griffiths.
- Adoption velocity – 34 % of eligible staff launched Citi Assist in the first two weeks; help-desk tickets beginning “Where do I find…?” fell noticeably.
- Regulatory assurance – Auditable, source-grounded answers reduced follow-up queries from internal audit and compliance testing teams.
- Economic metric – Griffiths tracks capacity created: the delta between human-only effort cost and AI-assisted effort cost across 100 sample tasks. Early models show positive ROI even after cloud and licensing expense (a back-of-the-envelope sketch follows this list).
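Citi has not published the formula behind “capacity created,” but as described above it is essentially the labour cost of doing a task unassisted minus the cost of doing it with AI help, summed over the sample and netted against platform spend. A back-of-the-envelope sketch, with every figure invented purely for illustration:

```python
# Illustrative "capacity created" calculation; all figures are invented.
# capacity_created = sum(human-only cost - AI-assisted cost) - platform cost
sample_tasks = [
    # (minutes unassisted, minutes with Citi Assist/Stylus) per sampled task
    (25, 4), (40, 6), (15, 3), (90, 20), (10, 2),
]
loaded_cost_per_minute = 1.50        # fully loaded analyst cost, USD (assumed)
platform_cost_for_sample = 60.0      # cloud + licensing allocated to the sample (assumed)

gross_saving = sum((human - assisted) * loaded_cost_per_minute
                   for human, assisted in sample_tasks)
capacity_created = gross_saving - platform_cost_for_sample

print(f"Gross saving: ${gross_saving:.2f}, capacity created: ${capacity_created:.2f}")
```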
Key Takeaways
- Anchor Gen-AI in a single, high-value workflow. Citi focused on policy retrieval—a universal pain-point—rather than launching a broad “chat with anything” bot.
- Co-engineer with a hyperscaler but own the guard-rails. Google supplied the infrastructure; Citi controlled prompts, embeddings and logs.
- Measure outcomes, not log-ins. “Capacity created” keeps leadership focused on economic value, not vanity usage metrics.
- Roll out in concentric rings. Starting with risk and compliance users hardened the system before expansion to front-office teams.
- Treat governance as a feature, not a trade-off. Version-controlled answers and role-based visibility turned regulatory scrutiny from a blocker into a selling point for wider adoption.
Case Study #2
Fraud Detection and Prevention
Objective:
Citigroup’s primary objective in leveraging AI for fraud detection is to significantly enhance the speed, accuracy, and effectiveness of identifying fraudulent financial activities. Given the massive volume of global transactions processed daily, the bank sought a solution that could monitor transactions in real-time, identify suspicious behaviors, and reduce false positives. The aim is to protect customer accounts, maintain trust, and minimize financial and reputational losses.
Financial institutions face increasingly sophisticated fraud techniques, including synthetic identity fraud, phishing scams, account takeovers, and insider threats. Traditional rule-based systems were no longer sufficient to detect nuanced fraud patterns. Hence, Citigroup’s goal was to transition from reactive models to a proactive, predictive fraud management system using artificial intelligence.
Implementation of AI:
Citigroup integrated advanced machine learning models and real-time analytics into its transaction monitoring infrastructure. One of its major implementations involves partnering with AI and machine learning platform Feedzai, a company specializing in real-time risk management solutions for financial institutions.
Key elements of the AI implementation include:
- Real-Time Transaction Scanning: Feedzai’s platform processes thousands of data points per second across billions of dollars in transactions. AI models analyze behavior across geographies, transaction amounts, device fingerprinting, and user habits.
- Behavioral Biometrics: The system evaluates individual customer behavior over time to create a risk score for each transaction. This includes typing speed, device usage patterns, and geographic consistency.
- Anomaly Detection and Pattern Recognition: Machine learning algorithms compare incoming transactions to historical patterns, identifying outliers that may indicate fraud. These patterns are constantly updated as new fraud trends emerge (a toy scoring sketch follows this list).
- Adaptive Learning: The AI system continuously learns from newly labeled fraud cases to improve its future predictions. This feedback loop ensures the system evolves to catch previously unseen tactics.
- Integration with Human Analysts: While AI does the heavy lifting in terms of detection, flagged transactions are routed to compliance officers for review, ensuring human oversight and accountability.
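Citi’s production stack runs on Feedzai and is far richer than anything that fits in a blog post, but the core anomaly-detection idea, learning what “normal” looks like and flagging transactions that deviate from it, can be illustrated with an off-the-shelf isolation forest. The features, synthetic data, and routing rule below are assumptions for illustration only, not Citi’s or Feedzai’s actual models.

```python
# Toy anomaly-scoring sketch; illustrative only, not Feedzai's or Citi's models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# One row per historical transaction: amount (USD), hour of day,
# distance from home (km), known device (1/0). Feature choice is an assumption.
history = np.column_stack([
    rng.lognormal(4.0, 0.5, 5000),   # typical spend amounts
    rng.normal(14, 3, 5000),         # mostly daytime activity
    rng.exponential(5.0, 5000),      # mostly near-home usage
    np.ones(5000),                   # previously seen device
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

def review(txn: np.ndarray) -> str:
    """decision_function > 0 looks normal; negative values are worth an analyst's eyes."""
    score = model.decision_function(txn.reshape(1, -1))[0]
    return "route to analyst" if score < 0 else "auto-approve"

print(review(np.array([60.0, 13.0, 2.0, 1.0])))       # routine daytime purchase
print(review(np.array([9500.0, 3.0, 4200.0, 0.0])))   # large, 3 a.m., far away, new device
```

The feedback loop described above would then feed confirmed fraud labels back in, either to retrain the unsupervised model’s thresholds or to train a supervised scorer alongside it.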
Results:
The implementation of AI-based fraud detection tools has led to several impactful outcomes for Citigroup:
- Faster Detection Times: Fraudulent transactions that may have previously gone unnoticed for hours or even days are now flagged in real time, allowing immediate intervention.
- Reduction in False Positives: AI models have drastically reduced the number of legitimate transactions mistakenly identified as fraud, improving customer satisfaction and reducing operational costs.
- Enhanced Risk Scoring: Citigroup now benefits from much more granular and accurate fraud risk scores per transaction, allowing better prioritization of high-risk alerts.
- Scalability Across Regions: AI has enabled Citigroup to apply uniform fraud detection models across its global network, enhancing compliance and customer protection across different regulatory environments.
- Cost Savings: By automating parts of the fraud detection workflow, Citigroup has significantly reduced reliance on manual review teams and legacy systems, translating into millions of dollars in operational savings.
Takeaway:
Citigroup’s AI-powered fraud detection initiative highlights a critical transformation in financial security. The move from rules-based systems to intelligent, adaptive AI-driven models has not only made the bank more secure but also more efficient and customer-friendly. By embracing advanced machine learning and behavioral analytics, Citigroup has created a proactive fraud prevention framework that learns continuously, scales globally, and reduces operational friction.
The key takeaway is that AI isn’t just a supportive tool—it’s becoming a central pillar in financial crime prevention strategies. As fraudsters evolve, AI’s ability to learn, adapt, and respond in real time will remain a crucial advantage for Citigroup and other major financial institutions. This strategic investment reflects a broader trend where AI is redefining risk management and trust in digital banking.
Case Study #3
Enhancing Risk Management and Compliance
Objective:
Citigroup’s goal in deploying AI for risk management and regulatory compliance is to strengthen its ability to proactively identify, assess, and mitigate financial and operational risks in a highly regulated and complex global environment. As a global bank operating in over 160 countries, Citi must comply with a broad range of regulatory frameworks, such as anti-money laundering (AML), Know Your Customer (KYC), Basel III, Dodd-Frank, GDPR, and more. The objective is to use AI to reduce regulatory breaches, prevent fines, improve internal governance, and streamline audits.
Manual compliance processes—often dependent on large teams of analysts—were increasingly proving to be inefficient, error-prone, and unsustainable given the growing volume of financial data and rapidly changing regulations. Citigroup needed a smarter, faster, and more scalable solution to maintain oversight across geographies, asset classes, and counterparties.
Implementation of AI:
Citigroup integrated artificial intelligence into its risk and compliance infrastructure through several layers of technology and partnerships. These implementations involve machine learning, natural language processing (NLP), and advanced analytics.
Key components of the AI deployment include:
- Automated Regulatory Monitoring: Citigroup leverages NLP tools to scan through thousands of pages of regulatory updates, identifying key changes that affect business operations. This automation reduces dependency on manual legal review and ensures timely adjustments to compliance procedures.
- Risk Modeling with Machine Learning: Citigroup uses machine learning algorithms to develop predictive risk models that analyze historical data and external market conditions. These models help the risk teams forecast credit risk, market volatility, operational disruptions, and counterparty defaults.
- AML and KYC Enhancements: AI is applied to detect patterns of suspicious behavior that may not be flagged by traditional rule-based systems. Algorithms evaluate customer transactions, relationships, and metadata to identify potential money laundering or fraudulent account activities. Enhanced due diligence and identity verification also benefit from AI’s ability to cross-reference global datasets.
- Transaction Monitoring and Alert Optimization: Citigroup’s systems use AI to filter and prioritize alerts, enabling analysts to focus on high-risk transactions while reducing false positives. AI evaluates transaction metadata (location, frequency, counterparties) to generate more accurate risk scores (a simple triage sketch follows this list).
- Model Risk Governance: Citi applies AI tools not just in modeling but in monitoring the risk of the models themselves. This ensures model accuracy, fairness, and regulatory compliance—important for maintaining trust with regulators and clients.
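Alert optimization of this kind is, at its core, a ranking problem: score every alert the monitoring systems raise, work the highest-risk ones first, and push the low scorers into batch review or auto-closure with an audit trail. A hedged sketch of that triage logic follows; the thresholds, fields, and queue names are invented, and in practice the risk score would come from a trained model rather than being supplied directly.

```python
# Illustrative AML alert-triage sketch; thresholds and fields are assumptions,
# not Citi's actual policy. Risk scores would come from an upstream ML model.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    risk_score: float          # 0-1, produced by a model upstream
    amount_usd: float
    counterparty_country: str

HIGH, LOW = 0.80, 0.20         # invented cut-offs for illustration

def triage(alerts: list[Alert]) -> dict[str, list[Alert]]:
    """Split alerts into analyst queue, batch review, and auto-close buckets."""
    queues = {"analyst_now": [], "batch_review": [], "auto_close": []}
    for alert in sorted(alerts, key=lambda a: a.risk_score, reverse=True):
        if alert.risk_score >= HIGH:
            queues["analyst_now"].append(alert)    # likely genuine threats, worked first
        elif alert.risk_score >= LOW:
            queues["batch_review"].append(alert)
        else:
            queues["auto_close"].append(alert)     # still logged for audit purposes
    return queues

alerts = [
    Alert("A-1", 0.93, 250_000, "KY"),
    Alert("A-2", 0.41, 9_800, "US"),
    Alert("A-3", 0.07, 120, "GB"),
]
for queue, items in triage(alerts).items():
    print(queue, [a.alert_id for a in items])
```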
Results:
Citigroup’s application of AI in compliance and risk management has yielded measurable benefits across operational, financial, and regulatory dimensions:
- Improved Compliance Efficiency: Automated document review and regulatory parsing drastically reduced the time needed to interpret new compliance requirements. In some cases, AI tools cut review time from weeks to hours.
- Lowered False Positive Rates in AML Alerts: AI-driven transaction monitoring systems resulted in up to a 30–40% reduction in false positives, enabling compliance officers to focus on genuine threats and reduce alert fatigue.
- Proactive Risk Identification: Predictive analytics allowed Citi’s risk managers to spot emerging credit or operational risks before they could escalate, improving capital allocation and contingency planning.
- Consistent Global Oversight: AI enabled uniform compliance and risk standards to be applied across Citigroup’s worldwide network, supporting better audit readiness and risk alignment in diverse regulatory jurisdictions.
- Regulatory Confidence: By showing regulators that AI is being responsibly used—with built-in transparency, explainability, and audit trails—Citigroup strengthened its standing with international regulatory bodies.
Takeaway:
Citigroup’s use of AI in risk management and compliance demonstrates how machine intelligence can fundamentally reshape how financial institutions handle regulatory complexity and operational uncertainty. What used to require massive teams of analysts, lawyers, and auditors can now be augmented by AI tools that work 24/7, continuously learning and adapting to global shifts.
The key takeaway is that AI empowers Citigroup not just to meet regulatory standards, but to exceed them—transforming compliance from a reactive obligation into a proactive strategic asset. With increasing pressure from regulators and stakeholders for transparency and accountability, AI’s role in compliance will only grow in importance. Citigroup’s early and comprehensive adoption of these tools positions it as a leader in intelligent governance and future-ready risk architecture.
Related: Ways LucidLink is Using AI [Case Study]
Case Study #4
Deployment of Generative AI Tools for Employees
Objective:
Citigroup’s objective in deploying generative AI tools internally is to streamline employee productivity, improve internal knowledge retrieval, and enhance decision-making across its global workforce. With over 240,000 employees worldwide, the bank manages vast repositories of policies, documents, procedures, and operational data. Navigating this information landscape has traditionally been a time-consuming process.
The goal was to reduce inefficiencies caused by fragmented data systems and repetitive manual tasks, while empowering employees—particularly in operations, compliance, risk, legal, and finance functions—to access insights quickly and accurately. This move aligns with Citigroup’s broader digital transformation strategy aimed at driving operational excellence and maintaining a competitive edge through technological innovation.
Implementation of AI:
Citigroup’s implementation involved the strategic rollout of proprietary generative AI tools developed in-house and refined with oversight from its technology and risk teams. These tools were piloted, tested for compliance and security, and then made available across eight countries to approximately 140,000 employees.
The two flagship generative AI tools currently in use are:
- Citi Assist: A natural language-based assistant that helps employees query internal systems using plain English. Employees can ask questions related to internal policies, workflows, HR protocols, IT processes, compliance guidelines, or even business strategy references. Citi Assist acts as a smart internal search engine that reduces reliance on manuals, SharePoint directories, and static documents.
- Citi Stylus: This tool enables employees to summarize, compare, and analyze documents in real time. For example, if a compliance officer is reviewing several legal contracts, Citi Stylus can distill the key differences and highlight regulatory implications. It’s also used for comparing procedural documents, evaluating business strategies, or identifying inconsistencies across reports.
Both tools are powered by large language models (LLMs) tailored for enterprise use, with built-in data protection layers, access controls, and audit capabilities. These implementations have been integrated with Citi’s internal cloud infrastructure and customized for domain-specific knowledge—ensuring the AI tools understand finance- and banking-specific terminology.
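Citi has not published how Citi Stylus is wired internally, but a document-comparison feature of this kind can be pictured as a prompt assembled over two extracted texts and sent to an enterprise LLM endpoint. In the sketch below the LLM call is deliberately left as a pluggable function, since the real endpoint, model, and guard-rails are not public.

```python
# Hypothetical sketch of a Stylus-style document comparison.
# "llm" is a placeholder for whatever enterprise LLM endpoint is in use;
# Citi's actual API and prompts are not public, so treat this as illustrative.
from typing import Callable

def compare_documents(doc_a: str, doc_b: str, llm: Callable[[str], str]) -> str:
    """Ask an LLM to summarise the key differences between two documents."""
    prompt = (
        "You are reviewing two internal policy documents.\n"
        "Summarise the key differences as bullet points and flag any "
        "regulatory implications.\n\n"
        f"--- DOCUMENT A ---\n{doc_a}\n\n--- DOCUMENT B ---\n{doc_b}"
    )
    return llm(prompt)

if __name__ == "__main__":
    # Stand-in LLM so the sketch runs end to end without a real endpoint.
    def fake_llm(prompt: str) -> str:
        return "- Document B adds a 30-day breach-notification clause."

    print(compare_documents("Old retention policy ...", "New retention policy ...", fake_llm))
```

In an enterprise setting the interesting work sits around this call: access controls on which documents can be submitted, logging of every prompt and response, and domain-tuned instructions so banking terminology is interpreted correctly.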
Results:
The deployment of these tools has already shown promising outcomes, both qualitatively and quantitatively:
- Time Savings Across Functions: Early adopters report a significant reduction in time spent searching for internal information. In some cases, tasks that took 30–45 minutes—like finding the right policy document or compliance procedure—are now completed in under 5 minutes.
- Increased Efficiency in Document Analysis: Legal, compliance, and audit teams are leveraging Citi Stylus to reduce hours spent comparing documents. Document reviews that previously took several hours are now completed in a fraction of the time.
- High User Adoption and Satisfaction: A majority of the 140,000 employees who have access to the tools are actively using them. Feedback indicates that the tools are intuitive and significantly reduce frustration and dependency on manual processes.
- Better Knowledge Management: With generative AI streamlining internal search and summarization, institutional knowledge is more accessible, and the risk of operational silos has been reduced. Teams across departments can collaborate more effectively when data and procedures are easier to access and understand.
- Scalable Deployment Model: The success of this initiative has encouraged Citigroup to consider expanding generative AI access to more functions and geographies, along with exploring use cases in client-facing roles.
Takeaway:
Citigroup’s deployment of generative AI tools is a prime example of using AI to empower employees, not replace them. Rather than automating away jobs, Citi has focused on giving its workforce smarter tools to do their jobs better, faster, and with more confidence. This approach respects regulatory sensitivities while embracing innovation in a controlled and scalable manner.
The key takeaway is that generative AI has a meaningful role to play in internal operations, especially within knowledge-heavy industries like banking. Citigroup’s early and thoughtful adoption of these tools showcases a blueprint for responsible enterprise AI usage—balancing efficiency gains with data security, compliance, and employee support. As generative AI continues to evolve, Citi is well-positioned to lead in intelligent workforce augmentation across the financial sector.
Case Study #5
Citi’s Client-Facing Virtual Agents: From Call-Centre IVA to Mobile “Chat-with-RM”
Citi set out to transform customer service by replacing rigid IVR menus and costly live-agent queues with an intelligent, always-on virtual-agent layer that could authenticate callers, resolve routine card and retail-bank inquiries in real time, and escalate complex cases—complete with full interaction context—to human specialists. The bank’s goals were threefold: (1) cut per-interaction costs and relieve call-centre congestion by driving self-service containment above 50 percent; (2) lift customer-satisfaction and speed-to-answer metrics for a global client base that demands 24-hour support; and (3) meet stringent OCC and Fed model-risk standards by logging every bot utterance, maintaining a robust human-in-the-loop safety net, and protecting customer data inside Citi’s secure perimeter.
Implementation
| Element | Execution Highlights |
| --- | --- |
| Conversational core | Citi licensed Interactions’ Adaptive Understanding® engine—speech recognition, intent detection and dialogue management blended with real-time human interception for edge cases. |
| Call-centre IVA | Deployed to Commercial Cards IVR; authenticates callers, surfaces balance, spend-limit and dispute info, and escalates rich context to live agents. Additional call types (payment postings, token re-issues) added in 2022–24. |
| Audit & security | Voiceprints mapped to account IDs; every intent, response and escalation logged to Citi’s ServiceNow instance for OCC audits. |
| Mobile “Chat-with-RM” | Singapore pilot lets Citigold clients tap a “Chat Now” button in the Citi Mobile® app to message their relationship manager, with AI intent triage and secure document upload. |
| Gen-AI stance | Citi has not released an open-ended LLM bot to consumers, citing hallucination risk and data-privacy concerns; instead, generative models summarise the customer’s session for agents and craft post-chat wrap-ups. |
| Change management | Red-team “jail-break” tests, weekly prompt tuning, and scorecards on containment, escalations and CSAT feed directly into the vendor roadmap. |
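Citi and Interactions have not published their handoff schema, but the pattern described in the table, where every escalation carries the IVA’s dialogue and backend look-ups so the live agent never starts cold, might be assembled roughly as below. All field names and values are invented for illustration.

```python
# Hypothetical escalation payload for an IVA-to-agent handoff.
# Field names are invented for illustration; Citi's actual schema is not public.
import json
from datetime import datetime, timezone

def build_escalation_payload(session_id, caller_id, turns, lookups, reason):
    """Bundle dialogue context and backend look-ups so the agent starts warm."""
    return {
        "session_id": session_id,
        "caller_id": caller_id,                 # already authenticated by the IVA
        "escalation_reason": reason,
        "escalated_at": datetime.now(timezone.utc).isoformat(),
        "dialogue": turns,                      # IVA transcript, intent-tagged
        "backend_lookups": lookups,             # balances, limits, dispute status
    }

payload = build_escalation_payload(
    session_id="ivr-2025-000123",
    caller_id="card-****-4821",
    turns=[
        {"speaker": "caller", "intent": "dispute_charge", "text": "I don't recognise this charge."},
        {"speaker": "iva", "text": "I can see a $412 charge from yesterday. Shall I open a dispute?"},
    ],
    lookups={"available_credit": 8150.00, "open_disputes": 0},
    reason="dispute_requires_agent_confirmation",
)
print(json.dumps(payload, indent=2))   # logged for audit and sent to the agent desktop
```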
Results (First 48 Months)
| KPI | Baseline (2019) | With IVA / Chat | Δ |
| --- | --- | --- | --- |
| Calls handled by IVA | 0 | ≈15 million cumulative | n/a |
| Containment rate | ~20 % IVR self-service | 52 % (all lines) | +32 p.p. |
| Annual service-cost saving | — | US $6.6 m/year | tangible |
| CSAT (1-5) | 3.9 | 4.6 | +0.7 (+19 %) |
| Speed-to-answer | 35 s IVR queue | sub-5 s IVA greeting | –30 s |
| Mobile chat adoption | n/a | 24 % of eligible Citigold clients used chat within first 90 days | pilot metric |
Note: containment and savings figures come from an Interactions case study describing a “Fortune 50 financial-services client” widely understood in vendor briefings to be Citi Commercial Cards.
Key Takeaways
- Voice leads, text follows. Citi started in the highest-cost channel (voice) where ROI was easiest to prove before fanning out to chat and in-app messaging.
- Blend AI and humans. Interactions routes edge utterances to a “human-in-the-loop” whisper layer, preserving experience while training the model.
- Containment and context go together. Every escalated call carries a JSON payload of the IVA’s dialogue and backend look-ups, cutting agent handle time.
- Governance first. Citi’s refusal—so far—to launch a fully generative public chatbot underscores that consumer-facing AI must clear higher bars than employee tools.
- Measure business outcomes, not bot pings. Savings-per-contained-call and CSAT lift keep leadership focused on value, not vanity metrics (see the quick arithmetic below).
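As a quick illustration of the “savings-per-contained-call” lens, the figures in the results table above can be combined as follows; the per-call number is an estimate derived from those figures, not something Citi or Interactions has reported.

```python
# Back-of-the-envelope estimate derived from the KPI table above;
# the per-call figure is illustrative, not a number Citi has published.
annual_saving_usd = 6_600_000        # "US $6.6 m/year"
years = 4                            # "First 48 Months"
calls_handled_by_iva = 15_000_000    # "~15 million cumulative"

saving_per_contained_call = annual_saving_usd * years / calls_handled_by_iva
print(f"~${saving_per_contained_call:.2f} saved per IVA-handled call")   # roughly $1.76
```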
Related: Ways Rippling is Using AI [Case Study]
Conclusion
Citigroup’s AI portfolio underscores a deceptively simple insight: technology is transformative only when it is trusted and tightly aligned with strategic priorities. Across five distinct domains, the bank has reclaimed analyst hours, cut false-positive fraud and AML alerts by as much as 30–40 percent, shrunk regulatory reviews from weeks to hours, put generative assistants in the hands of 140,000 employees, and lifted customer satisfaction—all while operating under the industry’s strictest guard-rails. The common threads are clear: executive sponsorship, cloud perimeters that keep proprietary data inside, human-in-the-loop oversight, and metrics that chase tangible economic value rather than buzz-word compliance.
By making governance a feature, not a burden, Citi shows regulators that responsible AI can enhance—not erode—control. By measuring “capacity created” and “minutes saved,” it gives shareholders a vocabulary for ROI that extends beyond technology spend. And by rolling tools out in concentric rings, it protects both customers and brand as models mature. The next milestones—real-time payment screening, broader regulatory coverage, and full-stack prompt auditability—signal that transparency will remain the guiding star. For any organisation shaping its own AI agenda, Citi’s experience offers a final lesson: start with problems worth solving, architect for accountability, and let results compound. The future will favor those who make that formula a habit.