Photo by Ales Nesetril on Unsplash
In September 2008, with $639 billion in assets acquired over 158 years, US investment bank Lehman Brothers filed for bankruptcy, the largest bankruptcy in US history to date. The unmoderated gambling in the US real estate market had claimed another victim, adding it to a casualty list that included institutions and individual investors alike.
The ensuing economic downturn quickly spread across the globe, ushering in what has become known as the Great Recession. With consumer sentiment at rock bottom, governments around the world enacted significant regulatory reforms to improve resilience against potential future crises.
In the UK, this included a push to diversify the financial landscape, allowing more competition to enter the market. Previously, just five banks had dominated the sector, controlling 73% of the market share: Lloyds Banking Group, RBS, HSBC, Barclays, and Santander.
Photo by Gilles Lambert on Unsplash
In 2010, Metro Bank became the first high street bank to be granted a license to operate in the UK in more than 150 years. Metro and its successors, such as Revolut and Monzo, have been dubbed “challenger banks”, as they are disrupting the traditional approach taken by the old-guard financial giants.
The success of these companies, which are still in their relative infancy, can be attributed to their embrace of modernity. “Fintech”, a portmanteau of “finance” and “technology”, represents a new industry within the financial sector. This new field encompasses the challenger banks as well as “neobanks”, entities that have abandoned physical locations entirely in favor of a purely digital presence.
By leveraging modern technology, fintech companies have reduced customer friction around accessibility, providing mobile and web applications that let customers perform financial transactions remotely, at any time. This focus on customer experience has won back the audience, restoring, or at least improving, the public perception of the banking industry.
As of December of last year, the UK is home to 3,316 fintech companies, and their encroachment into the regional market is noticeable.
Yet another field that has experienced remarkable growth is the AI industry. The UK's AI market, currently valued at over £21 billion, is projected to soar to £1 trillion by 2035. Over the past decade, the number of AI-focused companies in the UK has surged by more than 600%. With its unmatched capacity to process data, AI holds immense automation promise across a wide range of industries, and the fintech industry is no exception.
According to a survey conducted by the Bank of England, as of November 2024, 75% of financial service firms have already implemented AI in their operations, with a further 10% planning to do so over the next three years. Some of the promising use cases of AI in fintech include:
Those that fail to adopt will be left behind, as the seamless user experience and security improvements backed by AI integration will become an expectation rather than a cutting-edge feature.
In contrast to the benefits that AI will provide, its presence also expands the attack surface of organizations. Just as AI's power can be leveraged to improve operations, it can also be weaponized by malicious actors to identify and exploit vulnerabilities in both systems and the human mind. While regulations have addressed the risk of a repeat of the 2008 credit crunch, AI introduces a new class of cybersecurity threat, and financial firms remain lucrative targets for cybercriminals.
A new iteration of spear-phishing attacks, known as "deepfake" scams, has been unleashed upon the digital realm, resulting in millions of dollars in theft. Generative AI, a class of AI models that generate text, images, audio, and video, is now being employed to produce fabricated content that tricks targets into transferring large sums of money.
Just last year, British engineering firm Arup fell victim to a deepfake scam in which an employee was instructed, on a video conference call, to transfer £21 million to attacker-controlled bank accounts. The attack used highly convincing deepfake videos to impersonate the employee's coworkers and the company's CFO, making the fraudulent request appear completely legitimate.
Ranked by industry, fintech is one of the top targets of deepfake scams. Analysis performed by Sumsub found that the global number of deepfake incidents in fintech soared by 533% between the first quarters of 2023 and 2024. In total, the financial sector is projected to lose an estimated $40 billion to AI-driven fraud by 2027.
Yet, while disastrous, these AI-powered social engineering attacks aren't the only concern. AI systems themselves are subject to a number of potential vulnerabilities, stemming from deceptive prompts (prompt injection), coding errors, resource exhaustion, and tainted supply chains.
Hackers are also integrating AI into programs already in their toolkits to bolster their effectiveness: tools used to detect security vulnerabilities, misconfigurations, and exposed services across web applications, infrastructure, cloud environments, and networks.
When combined with information obtained from Internet-monitoring platforms like Shodan, these instruments can detect massive numbers of vulnerable devices in short order, exposing millions to compromise.
Photo by blurrystock on Unsplash
A “hackbot” is a tool that leverages AI, combining machine learning techniques, Large Language Models (LLMs), and computer-use capabilities to autonomously detect and exploit vulnerabilities in black-box scenarios.
Tried-and-tested fuzzing tools used in target enumeration, such as ffuf, have been forked and supercharged with AI wrappers. Nuclei, a vulnerability scanning engine, now has an AI-infused browser extension.
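To illustrate the kind of enumeration these tools automate, here is a minimal sketch of wordlist-based path fuzzing, the core technique behind ffuf. The target, wordlist, and responses are all hypothetical, and the probe is simulated so the example runs offline; a real scan would issue HTTP requests instead.

```python
# Minimal sketch of wordlist-based path fuzzing (the technique ffuf
# automates at scale). All names and responses here are illustrative.

def fuzz_paths(base_url, wordlist, probe):
    """Substitute each word into the URL template and keep interesting hits."""
    hits = []
    for word in wordlist:
        url = base_url.replace("FUZZ", word)  # ffuf-style placeholder
        status = probe(url)
        if status != 404:  # anything other than "not found" is worth a look
            hits.append((url, status))
    return hits

# Simulated target: only two paths exist on this hypothetical server.
KNOWN = {"https://example.com/admin": 401, "https://example.com/api": 200}

def fake_probe(url):
    return KNOWN.get(url, 404)

wordlist = ["admin", "api", "backup", "login"]
print(fuzz_paths("https://example.com/FUZZ", wordlist, fake_probe))
# → [('https://example.com/admin', 401), ('https://example.com/api', 200)]
```

What an AI wrapper adds on top of this loop is judgment: choosing wordlists from context and triaging which hits deserve a human's attention.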
Paid subscriptions for Caido, an HTTP proxy tool used for security assessments of web applications, come with the Assistant, an LLM specifically tailored for security research. Through prompts, the Assistant is able to accurately describe HTTP traffic, suggest attack vectors, and even generate proof-of-concept scripts for Cross-Site Request Forgery attacks, tasks that popular general-purpose models will refuse to perform. In addition to this built-in LLM, Caido users can also install Shift, an AI plugin that automates many testing techniques and likewise suggests attack vectors.
Although AI has increased the danger these tools can pose, the damage is only realized when they are in the wrong hands. Not all hackers operate maliciously. There also exist ethical hackers, those who find weaknesses and report them through responsible disclosure or bug bounty programs before they can be exploited by those with ill intent.
Security researcher and full-time bug bounty hunter grepme is one such ethical hacker, helping companies secure their assets every day. Based in the UK, he is currently the top-ranked hacker in Tide’s bug bounty program.
When asked about how he has incorporated AI into his workflow, he stated:
“AI driven recon can be a pretty handy addition to anyone’s toolkit. There’s a lot of lifting and quality of life improvements it can do for you - eyeballer is a great example of this for recon. It can screenshot and categorise web applications with its built in LLM capability to save you sifting through thousands of screenshots. LLM capability also goes far beyond that and can be incredibly useful for JS analysis. More often than not I’ll need endpoints, parameters and context extracted from JS on a web server. Being able to feed parts of JS into a model and extract useful information like that almost simultaneously reduces a lot of friction in anyone’s workflow. Using LLMs for threat modelling can also be great. Once I’ve thoroughly understood an application, threat modelled it and understood as much context as possible, I’ll often use a few different models for attack vector ideation and for vectors I might not have thought about.”
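The JS analysis grepme describes, pulling endpoints and parameters out of JavaScript before handing the context to a model, often starts with a simple mechanical pass. Below is a sketch of that pre-processing step; the regexes are deliberately crude and the sample script is hypothetical, not taken from any real target.

```python
import re

# Crude first-pass extraction of endpoints and query parameters from
# JavaScript source: the kind of pre-processing a researcher might do
# before feeding snippets to an LLM for deeper context analysis.

ENDPOINT_RE = re.compile(r"""["'](/[A-Za-z0-9_\-/]+)""")  # quoted paths
PARAM_RE = re.compile(r"[?&]([A-Za-z0-9_]+)=")            # query-string keys

def extract(js_source):
    endpoints = sorted(set(ENDPOINT_RE.findall(js_source)))
    params = sorted(set(PARAM_RE.findall(js_source)))
    return endpoints, params

# Hypothetical snippet of client-side code found on a web server.
sample = """
fetch('/api/v1/users?limit=50&offset=0');
const login = "/auth/login";
"""

print(extract(sample))
# → (['/api/v1/users', '/auth/login'], ['limit', 'offset'])
```

The mechanical pass keeps the model's input small and focused; the LLM then supplies what regexes cannot, such as inferring which parameters are user-controlled or security-relevant.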
Asked whether he believes malicious hackers are using AI in a similar way, grepme says: “Absolutely. There's some very real and scary use cases for red teaming and voice cloning to bypass all sorts of controls and phish people. AI and LLM use cases work for attackers and defenders alike, threat actors will almost certainly be capitalizing on the benefits of using one.”
Ethiack’s platform delivers real offensive security. We combine AI pentesting agents with human intelligence to simulate attacks continuously across your digital environment. Our agents continuously mimic real-world techniques to uncover and validate vulnerabilities with proof-of-exploit. Human intelligence steps in where creativity and context are needed, ensuring high-value assets are tested with real attacker logic. Ethiack integrates with CI/CD workflows, supports compliance (NIS2, DORA, CSA), and provides prioritised risks and actionable guides. We help our customers move from alerts to action, reducing noise, false positives, and costs. Our mission is grand, but we believe it is possible: secure the whole internet.
Currently, our Artihackers, in tandem with the world's leading human ethical hackers, have submitted over 60,000 valid vulnerability reports in an effort to improve the security posture of organizations around the globe. The Ethiack team has helped secure giants such as Aegon Santander, PROZIS, and ANA Aeroportos de Portugal, and we have earned numerous awards, including the prestigious "Most Valuable Hacker" title.
Are you ready to move to next-gen OffSec? Then why not try continuous AI Pentesting yourself, with our 30-day free trial?