Introduction
1. Good morning to those of you from RSIS, and to Mr Leo Yip, Head of Civil Service, and a very warm welcome, in particular, to our guests from overseas. Welcome to the opening ceremony of the 16th Asia-Pacific Programme for Senior National Security Officers, or APPSNO.
2. This year’s APPSNO is taking place amid an international environment that is unfortunately far more troubled and turbulent than anything we have experienced, or imagined, in recent years. The rules-based global order that has prevailed for the past 80 years is under threat, and the international system seems to be degenerating into the “law of the jungle”, where might makes right.
3. The rivalry between large countries has sharpened, and a focal point for this strategic contestation has been emerging technologies, such as Artificial Intelligence (AI), quantum computing and biotechnology.
4. These technologies, if used as a force for good, hold great promise to transform the way we think, live, and work. They can boost countries’ economic and strategic advantage. Unfortunately, at the same time, they can be weaponised by both state and non-state actors to pose serious national security challenges.
Emerging Technology Risks and National Security
5. This morning, I’d like to touch on three national security risks: disinformation, cyber threats, and biological weapons.
Disinformation
6. First, on disinformation. Advances in generative AI have made it very easy to produce false but highly believable content at speed and scale.
7. This has the potential to supercharge online disinformation campaigns, undermining our democratic processes and igniting social unrest.
(a) Some examples: Two days before the 2023 parliamentary elections in Slovakia, a deepfake audio clip was posted online, purportedly of the leader of a political party discussing election fraud. It is not clear how many voters were swayed by the deepfake audio clip, but the party leader eventually lost the election despite having led in the polls earlier.
(b) In another example, deepfake images of bloodstained streets were spread online during clashes between protestors and security forces in Pakistan last year. While some lives were indeed lost due to the conflict, the deepfake images claimed to show the massacre of hundreds of protestors by security forces, thereby inciting further violence.
8. As AI becomes more advanced, it will become more difficult to tell what is real from what is fake. This also poses a problem for biometric security systems, which we rely on to secure our critical systems and national borders.
(a) For example, the cybercrime group GoldFactory has used face-swapping technology to fool banks’ facial recognition authentication systems and gain access to victims’ bank accounts.
(b) Hostile actors have also used face-morphing tools to generate fake passport photos that are a composite of multiple individuals. This allows them to travel under alternate identities and evade facial recognition checks at border control. In 2021, Slovenia reported more than 40 cases of face-morphing being used by Albanians to obtain Slovenian passports.
9. So, what can we do about the threat of disinformation? Working with social media companies to label AI-generated content is one potential solution. Singapore will introduce a new Code of Practice under the Protection from Online Falsehoods and Manipulation Act, to require major social media services to label deepfake content on their platforms.
10. Furthermore, during time-sensitive periods such as elections or episodes of social unrest, Governments must have the tools to take down disinformation swiftly, before the falsehoods go viral and the situation spirals out of control. That is why Singapore has passed legislation to ban deepfakes of candidates during election periods, and to quickly take down inflammatory content, including AI-generated content, on race or religion. [1] I believe all reasonable citizens would agree that this is necessary to ensure fair and just elections.
11. At the same time, and very importantly, we need to build an aware and vigilant public that can critically evaluate information and protect itself against disinformation. This involves all stakeholders, including Governments, private companies, and the public. We all need to work together to build our collective resilience to disinformation.
Cyberthreats
12. I would now like to touch on cyberthreats, which have also been exacerbated by AI.
13. Researchers have warned that cybercriminals are using GhostGPT, a new malicious generative AI chatbot, to code malware, develop exploits, and write phishing emails. Generative AI has reduced the level of coding ability needed to mount cyberattacks, lowering the barrier to carrying them out.
14. The misuse of AI by novice cybercriminals is already concerning. But what is even more alarming is how expert cybercriminals can leverage AI to enhance the speed, scale, and sophistication of cyberattacks. For instance, AI has reportedly been used to enhance Distributed Denial of Service (DDoS) attacks by optimising attack strategies and dynamically adjusting tactics to evade defences. These attacks could cause serious disruptions to critical infrastructure such as power grids, financial systems, and communications networks, crippling our society and our way of life.
15. Such AI-driven cyberattacks may target multiple domains at the same time. There could be a multi-pronged attack on our healthcare, transport, and financial systems. Hence, there is a strong need to strengthen whole-of-government coordination against such attacks. At the same time, the transboundary nature of cyberattacks necessitates closer collaboration between countries to detect these attacks, share information where possible, and respond to them.
16. Apart from AI-driven cyberattacks, another area to watch out for is quantum computing. Quantum computers are expected to break many classical encryption algorithms that we rely on today to protect everything from financial transactions to sensitive government communications. This will be game-changing for cybersecurity.
Biological Weapons
17. Lastly, I’d like to touch on biotechnology. Advances in gene editing technologies have enabled manipulation of the genetic code with unprecedented precision. This can be misused, for instance to engineer more dangerous pathogens. As we saw during the COVID-19 pandemic, an outbreak can force economies and societies to shut down to contain the spread of disease, and countless lives may be lost.
18. The threat of biotechnology falling into the wrong hands is compounded because these tools are becoming increasingly accessible and affordable. For instance, gene sequences of biological agents are readily available on publicly accessible databases and can be easily purchased from commercial synthetic DNA companies. This means that hostile actors can now manufacture biological weapons and toxins at relatively low cost and with minimal technical expertise.
19. Guarding against the threat of biological weapons will require close international collaboration. Today, almost all countries have ratified the UN Biological Weapons Convention, which requires member countries to prevent the development, production, acquisition, transfer and use of biological weapons within their jurisdictions. This is a good starting point. But we must all keep abreast of developments in biotechnology and hostile actors’ modus operandi, to ensure that our security measures keep up with the evolving threat landscape.
Conclusion
20. It seems like a lot to think about on a Monday morning. But it goes to show the scale and breadth of the threats that we have to deal with, especially in this turbulent world. Singapore is very happy to serve as a convening point for senior security officials such as yourselves, so that we can come together to discuss the threats we are facing, not only in the region but across the world. I hope that, as we come together, we can think these issues through, share best practices and ideas, and at the same time forge strong bonds and friendships that will extend beyond the ambit of this conference.
21. Thank you very much, again, for being here.
[1] Elections (Integrity of Online Advertising) (Amendment) Bill 2024 and Online Criminal Harms Act (OCHA) 2023 respectively.