Q&A with Michael Borrelli at AI & Partners
Kevin Smith, Payments Risk Director at the Payments Consulting Network[1] chatted recently with Michael Borrelli, a Director at AI & Partners[2].
AI & Partners is an AI governance software company helping organisations comply with the EU AI Act through an integrated platform for AI system discovery, risk management and regulatory reporting. Its tools and advisory services enable firms to inventory AI assets, assess risks and embed responsible AI practices at scale. As a director, Michael leads client strategy and governance solutions, translating complex regulatory requirements into practical compliance and oversight frameworks.
Michael is speaking on day one of the Pay360 London event on 25 March 2026[3]. This was an opportunity to catch up with Michael beforehand and understand a little more about his passion for all things AI governance, its growing importance and what it means for the financial services industry.
Read the full interview below.
KS: Thank you for the opportunity to chat, Michael. Could you please provide an overview of who AI & Partners is?
MB: AI & Partners operates globally with offices in London, Amsterdam and Singapore, supporting organisations as they navigate emerging AI regulation, including the EU AI Act. The team has developed significant expertise in AI governance and compliance requirements across multiple sectors and jurisdictions.
As AI deployment accelerates globally, our focus is helping organisations adopt AI responsibly — balancing innovation with effective governance, risk management and regulatory compliance.
Our core solution is an AI governance platform that provides AI discovery, automated inventory, AI risk self-assessments and model monitoring capabilities.
KS: You are attending Pay360 in London. What does AI governance mean for the financial services and payments community?
MB: The financial services sector is a fast adopter and user of AI capabilities. AI is used across the end-to-end lifecycle of customer and third-party relationships with a financial institution, whether a regulated entity or not.
It provides greater and faster use of data, enabling more accurate, data-driven decisioning. It facilitates enhanced management of relationships with customers, third parties, regulatory authorities and more. It also serves to provide management with enhanced reporting and assessment.
It has numerous applications, including:
- enhanced onboarding and ongoing monitoring of clients for KYC/AML purposes
- more effective third-party and partner engagement and management
- more timely transaction and fraud monitoring
- improved credit scoring
- regulatory and internal reporting
- emerging embedded finance and payment opportunities.
However, the cynic in me suggests that AI deployments are introducing a type of organizational "psychosis". Too often we design systems that simply confirm what we expect to see. There can be an over-reliance on AI outputs without sufficient challenge to the underlying assumptions, training data, or model behaviour. Without appropriate governance and human oversight, organizations risk trusting AI outputs without adequate sense-checking.
As in many other business units and disciplines, AI is a tool that assists fraud, risk and compliance specialists. It provides new tools and services to management, in particular those individuals identified under the SM&CR[4], to reduce harm to consumers and protect market integrity. These individuals are held accountable by both firms and regulators.
Regulated financial institutions have a fiduciary duty. AI usage must enhance this through better and faster decisioning. It must not be allowed to undermine or weaken our risk and compliance management obligations and responsibilities.
KS: What does this mean for financial institutions?
MB: AI models are powerful tools that will bring positive change and new opportunities. They will displace some roles and create others. They will help us to know more about our customers, our third parties, and even the bad actors who are trying to perpetrate fraud, process illegal transactions or break our infrastructure.
However, AI models can introduce bias through the quality of the data input and analysis or model degradation. As AI models become more complex — and potentially incorporate technologies such as quantum computing and advanced machine learning — organisations will face increasing challenges around explainability, validation, and model oversight. The question is not only what these systems can do, but whether firms have the governance structures in place to manage them responsibly.
Another emerging challenge is the growing complexity of AI architectures, particularly where organizations rely on multiple external APIs and data sources. As more systems are interconnected, firms must carefully manage issues such as data integrity, third-party risk, operational resilience, and security.
We must be able to demonstrate the security and integrity of our data and systems, for example through certification against standards such as ISO/IEC 27001 for information security or ISO/IEC 42001 for AI management systems.
Additionally, we must seek guidance and advice, and understand and learn industry best practice.
Through greater collaboration and partnership, financial institutions will understand the power and opportunities at their fingertips and how best to use and control them.
A positive example of this is the recent announcement that Goldman Sachs is collaborating with AI startup Anthropic to develop and deploy AI-powered agents, designed to automate complex back-office tasks such as trade and transaction accounting, compliance, and client onboarding.
The partnership highlights a significant move forward by a global financial institution to embed AI agents directly into their core banking operations. This helps them move beyond simple chatbots to autonomous, task-oriented systems.
This development utilises Anthropic's Claude model, which I am sure we will see and hear more about.
KS: Is the presence or absence of AI legislation a help or a hindrance?
MB: With no overarching statutory AI Act in the UK today, AI is currently governed by a mixture of existing legislation, including the UK GDPR.
In contrast, the EU AI Act, which entered into force in 2024 with its first obligations applying from February 2025, is designed to be collaborative and to facilitate greater adoption of AI. However, it is viewed by many as too restrictive.
It is predicted that the UK will move towards adopting more specific AI legislation in 2026. This is welcomed across multiple industries to allay concerns and promote benefits.
This will drive the UK from the purely voluntary, pro-innovation approach to a framework targeting safety, security, and potential risks. It is expected that the upcoming legislation will focus on the need to regulate the "most advanced AI models," to establish a central AI Authority, and enhance accountability for developers.
KS: What are the key AI takeaways for participants in the financial services community?
MB: I often refer to this as "Know Your AI" (KYAI), not to be confused with the Indonesian term Kyai.
The concept is straightforward: organisations should understand why AI is being deployed, how their systems and data operate, what risks the models introduce, and how third-party providers are integrated and governed.
In practical terms this means clearly identifying the business purpose for AI, understanding the data and systems that underpin it, performing structured risk assessments, and maintaining strong oversight and management of third-party technologies and partnerships.
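To make the "Know Your AI" idea concrete, the record-keeping described above can be sketched as a minimal inventory entry. This is an illustrative sketch only, not AI & Partners' platform or any regulator-mandated schema; the field names, example system and simplified risk tiers (loosely modelled on the EU AI Act's risk categories) are assumptions for the purpose of the example.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class RiskTier(Enum):
    # Simplified tiers loosely modelled on the EU AI Act's risk categories
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical 'Know Your AI' inventory."""
    name: str
    business_purpose: str           # why the AI is being deployed
    data_sources: List[str]         # data and systems that underpin it
    third_party_provider: Optional[str]  # external vendor, if any
    risk_tier: RiskTier             # outcome of a structured risk assessment
    last_risk_assessment: str       # ISO date of the most recent review

    def needs_enhanced_oversight(self) -> bool:
        # High-risk (and prohibited) systems warrant the strongest governance
        return self.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH)

# Example inventory with one hypothetical KYC screening system
inventory = [
    AISystemRecord(
        name="kyc-screening-model",
        business_purpose="Client onboarding and AML screening",
        data_sources=["sanctions lists", "customer records"],
        third_party_provider="ExampleVendor Ltd",
        risk_tier=RiskTier.HIGH,
        last_risk_assessment="2025-11-01",
    ),
]

# Surface the systems that demand enhanced oversight
flagged = [s.name for s in inventory if s.needs_enhanced_oversight()]
print(flagged)  # ['kyc-screening-model']
```

Even a simple register like this answers the four KYAI questions: purpose, underlying data, assessed risk, and third-party dependencies.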
An ill-informed approach to AI will cause reputational damage, among other impacts and restrictions.
Remember, AI governance and usage is not just a Board-level problem. Liability for getting this wrong sits not only with the company but also with its employees and contractors.
There is a case here of 'buyer beware'. AI adopters must be well informed and risk aware.
With respect to AI adoption and deployment, organizations and individuals must operate with their "eyes wide open".
***
[1] https://paymentsconsulting.com
[2] https://www.ai-and-partners.com
[3] https://pay360event.com/
[4] The FCA Senior Managers and Certification Regime (SM&CR)
***
Author: Kevin Smith, Payments Risk Director, London, Payments Consulting Network
Kevin has over 30 years' experience in the retail management, financial services and payments industries. With 17 years at Visa globally, he has a proven track record in developing and executing innovative and practical business strategy, product development and service definition in card acceptance and acquiring. With both marketing and risk management backgrounds, he brings a pragmatic approach to business development.
***
Payments Consulting Network is a media partner of Pay360 2026 happening on 25 – 26 March 2026 at Hall S6, Excel, London.
👉 Secure your place today: https://pay360event.com/
***
If you found this article helpful and would like to read similar articles, please subscribe to our newsletter.




