The banking industry in Asia Pacific (APAC) is thriving, with strong financial performance underpinning its technological ambitions.
In 2023, net revenue growth for the top 55 banks averaged nearly 5 percent, with some countries, like Singapore, seeing extraordinary growth of over 24 percent. This success has been fueled by significant ICT investments, with an average ICT-to-revenue ratio of 3.83 percent. It’s no coincidence that much of this spending is directed toward artificial intelligence (AI), which is reshaping customer experiences and internal operations at an unprecedented pace.
More than 80 percent of banks across APAC are using AI to elevate their customer engagement, and nearly half of customer interactions are now AI-enabled. Banks like DBS and OCBC have embraced AI to personalize services and empower employees. DBS, for instance, leverages over 100 AI algorithms for tailored customer recommendations, while OCBC deploys a GPT-powered chatbot to support over 30,000 employees in tasks ranging from research to ideation. Beyond customer experience, AI is playing a critical role in fraud detection, credit scoring, and loan approvals. The transformational impact is undeniable, but it comes with significant risks.
As banks rush to adopt AI, driven by its potential and pressure to innovate, critical security considerations often lag behind. The sheer speed of adoption, particularly following breakthroughs like generative AI, can sometimes outpace the safeguards required to protect sensitive financial data. Banking, as one of the most regulated industries, cannot afford to compromise on security and compliance. Yet, in the race to innovate, are institutions unintentionally exposing themselves to unnecessary risks?
Data sovereignty and privacy are among the most pressing challenges. Banks deal with highly sensitive information, and regulations such as Singapore’s PDPA or Japan’s Act on the Protection of Personal Information (APPI) demand rigorous governance. While proofs of concept and AI demos often proceed without strict oversight, moving these systems into production raises questions about how well organizations are addressing compliance. The potential for training data to introduce biases or expose confidential information adds another layer of complexity that must be addressed early and comprehensively.
The supply chain security dilemma
Like other modern applications, AI applications are built from multiple components, many of them open source. While this modular approach enables faster development and innovation, it also introduces significant vulnerabilities. Open-source components can harbor hidden flaws or backdoors, making them attractive targets for cybercriminals.
The reliance on third-party APIs further compounds these risks, creating intricate supply chain architectures that are difficult to monitor and secure. The proliferation of shadow APIs (endpoints running outside the documented inventory) and zombie APIs (deprecated endpoints that remain live) – a growing challenge in today’s application landscape – adds to this complexity. Without robust API security measures, financial institutions risk exposing sensitive data or compromising critical operations.
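To make that concrete, the sketch below shows the simplest form of API inventory reconciliation: diffing the endpoints observed in gateway logs against the documented catalog to surface shadow and zombie endpoints. The paths and data sources are illustrative assumptions, not any particular vendor’s interface.

```python
# Minimal sketch: surface shadow and zombie APIs by reconciling the
# documented inventory with endpoints observed in gateway logs.
# All endpoint paths and data sources are illustrative assumptions.

documented = {"/v1/accounts", "/v1/payments", "/v1/loans"}   # API catalog
deprecated = {"/v1/loans"}                                   # marked end-of-life
observed = {"/v1/accounts", "/v1/payments", "/v1/loans",
            "/v1/score-test"}                                # from access logs

shadow = observed - documented   # live but undocumented
zombie = deprecated & observed   # retired on paper, still serving traffic

for path in sorted(shadow):
    print(f"SHADOW API (undocumented, live): {path}")
for path in sorted(zombie):
    print(f"ZOMBIE API (deprecated, still live): {path}")
```

In practice this reconciliation runs continuously against live traffic rather than a static snapshot, but the core decision – anything unaccounted for is a finding – stays the same.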
For banks, these risks are especially concerning because their AI applications rarely operate in isolation. They are deeply integrated with other enterprise systems, meaning a single vulnerability in the AI supply chain could ripple across the organization. Addressing these challenges requires a comprehensive approach to supply chain security, including stringent vetting of open-source components, continuous monitoring of APIs, and proactive threat detection.
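One of those controls, dependency vetting, can be automated in the build pipeline. The sketch below queries the public OSV vulnerability database (osv.dev) for each pinned dependency; the pins themselves are illustrative, and a real pipeline would read them from a lockfile or software bill of materials.

```python
# Sketch: vet open-source dependencies against the public OSV
# vulnerability database (https://osv.dev) before they ship.
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return OSV advisories recorded for this exact package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

# Illustrative pins; a real pipeline would parse the lockfile or SBOM.
for pkg, ver in [("requests", "2.19.0"), ("numpy", "1.26.4")]:
    advisories = known_vulns(pkg, ver)
    status = f"{len(advisories)} known advisories" if advisories else "clean"
    print(f"{pkg}=={ver}: {status}")
```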
Collaboration challenges: Aligning security with innovation
In many organizations, security teams operate in silos, disconnected from data and application teams. This misalignment leads to gaps in governance, where innovation outruns security protocols. Historically, similar issues arose between SecOps and application development teams, where security was often an afterthought, applied late in the development cycle. The introduction of DevSecOps bridged this gap by embedding security into the development pipeline, enabling continuous security checks with tools and automation.
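As a rough illustration of that pattern, a pipeline gate can chain security scanners and block the build on findings. The tools named below – bandit for Python static analysis and pip-audit for dependency checks – are common open-source examples, not a prescribed stack, and the severity threshold is an assumption.

```python
# Sketch of a DevSecOps pipeline gate: run security scanners and fail
# the stage if any of them reports findings.
import subprocess
import sys

CHECKS = [
    (["bandit", "-r", "src/", "-ll"], "static analysis (medium+ severity)"),
    (["pip-audit", "-r", "requirements.txt"], "dependency vulnerability audit"),
]

failed = False
for cmd, label in CHECKS:
    print(f"--> {label}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:  # both tools exit non-zero on findings
        print(f"    FAILED: {label}")
        failed = True

sys.exit(1 if failed else 0)  # a non-zero exit blocks the pipeline stage
```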
A comparable disconnect now exists between SecOps and data teams working on AI applications. While data teams focus on optimizing pipelines and training models, SecOps teams prioritize protecting sensitive data and ensuring model integrity. AI introduces unique risks like model poisoning, adversarial attacks, and data leakage, which traditional security frameworks struggle to address. And unlike traditional application development, AI workflows still lack a standardized framework for integrating security – an equivalent of DevSecOps – which makes collaboration even more challenging.
Bridging this divide will require cultural shifts, cross-functional collaboration, and new frameworks tailored to AI. Emerging concepts like “AI SecOps” or “DataSecOps” aim to integrate security across the AI lifecycle, embedding security into data pipelines and model development. Unified tools designed for AI workflows, capable of detecting bias, model drift, and inference attacks, will play a critical role. By aligning priorities and fostering collaboration, organizations can mitigate AI-specific risks while maintaining the pace of innovation.
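To illustrate one piece of that tooling, the sketch below monitors a single model feature for input drift using the Population Stability Index (PSI), a metric widely used in credit-risk model monitoring. The 0.1 and 0.25 thresholds are conventional rules of thumb rather than a standard, and the data here is synthetic.

```python
# Sketch: detect input drift on one model feature with the Population
# Stability Index (PSI). Thresholds follow common rules of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time baseline and live traffic."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)             # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature distribution at training time
live = rng.normal(0.5, 1.2, 10_000)       # shifted production traffic

score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.25:
    print("ALERT: significant drift - trigger review or retraining")
elif score > 0.10:
    print("WARN: moderate drift - monitor closely")
```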
Charting a balanced path between innovation and security
Building on this collaboration, the path forward for APAC banks lies in balancing their drive for innovation with a steadfast commitment to security. Organizations must adopt strategies that embed security into the development and deployment of AI solutions without stifling innovation. Phased rollouts, where systems are tested iteratively before production, can help identify vulnerabilities early. Adopting zero-trust architectures ensures strict access controls, safeguarding sensitive data at every stage.
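At its core, zero trust means a deny-by-default decision evaluated on every request, based on identity and device posture rather than network location. The sketch below illustrates that logic; the roles, resources, and policy table are illustrative assumptions.

```python
# Sketch: deny-by-default access decision in the spirit of zero trust.
# Every request is judged on identity and device posture, never on
# network location. Roles, resources, and policy are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    mfa_verified: bool
    device_compliant: bool
    resource: str

# Least-privilege policy: which roles may access which resources.
POLICY = {
    "customer-pii": {"fraud-analyst", "relationship-manager"},
    "model-weights": {"ml-engineer"},
}

def authorize(req: Request) -> bool:
    """Grant only when every check passes; anything else is denied."""
    if not (req.mfa_verified and req.device_compliant):
        return False                              # untrusted session or device
    return req.user_role in POLICY.get(req.resource, set())  # unknown resource -> deny

print(authorize(Request("ml-engineer", True, True, "model-weights")))   # True
print(authorize(Request("ml-engineer", True, False, "model-weights")))  # False: device posture fails
```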
Aligning with emerging regulatory frameworks will also be crucial as governments across the region introduce new compliance requirements for AI and data handling. Strategic partnerships with cybersecurity experts who understand the complexities of AI-driven applications can enhance this alignment, offering tailored solutions that secure the entire application lifecycle.
Ultimately, leveraging AI itself to enhance security represents a forward-looking approach. For instance, F5 uses AI to analyze existing policies and recommend actionable security enhancements tailored to individual needs. This kind of proactive, integrated strategy enables banks to push the boundaries of innovation while ensuring trust and compliance remain at the forefront. By fostering collaboration, adopting cutting-edge tools, and maintaining a focus on security by design, APAC banks can lead the charge in transforming the financial landscape safely and effectively.
Contributed by F5.