Artificial intelligence (AI) refers to the simulation of human intelligence in machines or computer systems that are programmed to think like humans and mimic their actions. ChatGPT (Chat Generative Pre-trained Transformer) is an online artificial intelligence chatbot trained to hold human-like conversations and generate detailed responses to queries. ChatGPT became a blockbuster and a global sensation when it was released in November 2022. According to web traffic data from similarweb.com, OpenAI's ChatGPT surpassed one billion page visits in February 2023, cementing its position as the fastest-growing app in history. By comparison, it took TikTok about nine months after its global launch to reach 100 million users, while Instagram took more than two years.
Users of ChatGPT span the world, with the United States having the highest number of users, accounting for 15.73% of the total, and India second with 7.10%. China is 20th with 16.58%, Nigeria is 24th with 12.24%, and South Africa is 41st with 12.63%, as of March 14, 2023. OpenAI, the parent company of ChatGPT, is currently valued at about $29 billion. Feedback from academics, global leaders, and business leaders on artificial intelligence tools like ChatGPT has been mixed. The likes of Bill Gates agree that ChatGPT can free up time in workers' lives by making employees more efficient. A study titled "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence", by two economics PhD candidates at the Massachusetts Institute of Technology (MIT), found that using ChatGPT made white-collar work swifter with no sacrifice in quality and made it easier to "improve work quickly". Israeli President Isaac Herzog recently revealed that the opening part of one of his speeches was written by the artificial intelligence software ChatGPT.
Lately, Bill Gates and UK Prime Minister Rishi Sunak were reportedly grilled by ChatGPT during an interview. Similarly, the United States Department of Defense (DoD) enlisted ChatGPT to write a press release about a new task force exploring novel ways to forestall the threat of unmanned aerial systems. On the flip side, Elon Musk is of the opinion that "artificial intelligence is the real existential risk to humankind". According to Musk, "artificial intelligence will outsmart humanity and overtake human civilization in less than five years".
Theoretical physicist and one of Britain's pre-eminent scientists, Professor Stephen Hawking, seemed to agree with Elon Musk. He warned in 2014 that artificial intelligence could spell the end of the human race. In the words of Hawking, "Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever-increasing rate". He went further to assert that "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded". Cybersecurity experts from the National Cyber Security Centre (NCSC), a branch of the United Kingdom's spy agency, the Government Communications Headquarters (GCHQ), say artificially intelligent chatbots like ChatGPT pose a security threat because sensitive queries, including potentially user-identifiable information, could be hacked or leaked.
Underlying Principle And Modus Operandi of ChatGPT
ChatGPT is essentially a large language model (LLM): a deep learning algorithm that can recognize, summarize, translate, predict, and generate text and other content based on knowledge gained from massive datasets. On March 14, 2023, OpenAI officially announced the launch of the large multimodal model GPT-4. A major difference between GPT-3.5 and GPT-4 is that while GPT-3.5 is a text-to-text model, GPT-4 is more of a data-to-text model. Additionally, GPT-3.5 is limited to responses of about 3,000 words, while GPT-4 can generate responses of more than 25,000 words. GPT-4 is 82% less likely than its predecessor to respond to requests for disallowed content, scores 40% higher on certain tests of factuality, is more multilingual, and lets developers decide their AI's tone and verbosity.
Basically, large language models (LLMs) are trained on massive amounts of data to accurately predict which word comes next in a sentence. To give an idea, the ChatGPT model is said to have been trained on roughly 570GB of data sourced from books, Wikipedia, research articles, web texts, websites, and other writing on the internet. Approximately 300 billion words were reportedly fed into the system, said to be equivalent to roughly 164,129 times the number of words in the entire Lord of the Rings series (including The Hobbit).
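To illustrate the next-word prediction objective described above, here is a minimal sketch in Python using the openly available GPT-2 model from the Hugging Face transformers library (chosen purely for illustration; ChatGPT's own models are not publicly downloadable, and the prompt is a placeholder):

```python
# Minimal sketch of next-token prediction, the training objective behind LLMs.
# GPT-2 serves as a freely available stand-in for ChatGPT's models.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every token in the vocabulary
next_token_id = int(logits[0, -1].argmax())  # the single most likely next token
print(prompt + tokenizer.decode(next_token_id))
```

Repeating this step, feeding each newly chosen word back into the model, is essentially how a chatbot builds up a full response one token at a time.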
Currently, ChatGPT has very limited knowledge of the world after 2021 and is therefore poor at answering questions about recent or real-time events. In addition to English, ChatGPT understands 95 other languages spoken around the world, including French, Spanish, German, and Chinese; unfortunately, it does not currently recognize any Nigerian local language. ChatGPT supports speech-to-text and text-to-speech technologies, which means you can talk to it through your microphone and hear its responses in a voice. It is important to note that the ChatGPT model can become overwhelmed and generate incorrect information, repetitions, or unusual combinations of words and phrases.
This is because large language models like ChatGPT are trained to generate text that is fluent and coherent, but they may not always be able to generate responses that are as nuanced or creative as those written by a human. A blog post by Dr. David Wilkinson, a lecturer at Oxford University and editor-in-chief of The Oxford Review, reports that ChatGPT appears to be making up academic references. In essence, Wilkinson counsels that just because something comes out of ChatGPT does not mean it is right: "You need to be very careful about what you're doing, particularly in academic circumstances, but also professional".
How AI-Powered ChatGPT Sparked A Chatbot Arms Race
After ChatGPT went viral, major tech companies around the world started scrambling to deploy generative artificial intelligence. Following speculation that the release of ChatGPT could disrupt and upend the search engine business, Google reportedly triggered a "code red", summoning founders Sergey Brin and Larry Page back to the company. Soon after, Google launched an experimental AI-powered chatbot called Bard, powered by its Language Model for Dialogue Applications (LaMDA for short). Google is also said to be using artificial intelligence to improve its search features, including the popular Google Lens and the new multisearch feature.
Google's parent company, Alphabet, lost $163bn in value after the search engine's new chatbot, Bard, answered a question about the James Webb telescope incorrectly during a demo. Microsoft recently embedded ChatGPT technology in its product suite, including Microsoft 365 – Word documents, Excel spreadsheets, PowerPoint presentations, and Outlook emails. Not to be outflanked, Apple, Meta (the parent company of Facebook), and Amazon have plunged into generative artificial intelligence. Apart from Microsoft Bing, other artificial intelligence-powered search engines giving Google a run for its money include Wolfram Alpha, You.com, Perplexity AI, Andi, Metaphor, and Neeva, amongst others.
Artificial Intelligence (ChatGPT) And National Security Concerns
No technological innovation is without some downside, especially a nascent and evolving technology. Just recently, the United Kingdom's intelligence, security, and cybersecurity agency, the Government Communications Headquarters, commonly known as GCHQ, warned that AI-powered chatbots like ChatGPT are emerging security threats, noting that the companies operating the technology – like Microsoft and Google – are able to read the questions typed into the chatbots. United States Congressman Ted Lieu recently wrote an opinion piece in the New York Times in which he expressed his enthusiasm about artificial intelligence (AI) and "the incredible ways it will continue to advance society", but also said he is "very concerned about AI, particularly uncontrolled and disorganized AI". Lieu wrote: "Imagine a world where autonomous weapons would roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks".
According to Steven Stalinsky, the executive director of the Middle East Media Research Institute (MEMRI), "it is not a question of whether terrorists will use artificial intelligence (AI), but of how and when". He noted that terrorists already use technology: cryptocurrency for fundraising, encryption for communications, AI for hacking and for weapons systems (including drones and self-driving car bombs), and bots for outreach, recruitment, and planning attacks. Stalinsky added: "Jihadis, for their part, have always been early adopters of emerging technologies: Al-Qaeda leader Osama bin Laden used email to communicate his plans for the 9/11 attacks. American-born Al-Qaeda ideologue Anwar Al-Awlaki used YouTube for outreach, recruiting a generation of followers in the West. Indeed, by 2010, senior Al-Qaeda commanders were conducting highly selective recruitment of 'specialist cadres with technology skills'."
As a matter of fact, there is growing concern that ChatGPT could make disinformation campaigns and political interference more widespread and realistic than ever. Sam Altman, the CEO of OpenAI, the company that created ChatGPT, acknowledged that artificial intelligence technology will reshape society but warned that it comes with real dangers, such as large-scale disinformation. It is feared that non-state actors and authoritarian regimes could exploit ChatGPT to pollute public spaces with toxic content and further undermine citizens' trust in democracies. To this end, Adrian Joseph, British Telecom's chief data and artificial intelligence officer, said the United Kingdom needs to invest in and support the creation of a British version of ChatGPT – "BritGPT" – or the country would risk its national security and declining competitiveness.
The head of the FBI, Christopher Wray, told a House Homeland Security Committee hearing that the agency has "national security concerns" about TikTok, warning that the Chinese government could potentially use the popular video-sharing app to harvest data on millions of Americans, compromise personal devices, or run influence operations if it elected to do so.
Open Source Intelligence Analysis in the Age of Artificial Intelligence – ChatGPT
The former director of the US National Geospatial-Intelligence Agency, Robert Cardillo, predicted years ago that bots would soon be analyzing most of the imagery collected by satellites and would replace many human analysts. According to one report, "artificial intelligence and automation will plausibly perform 75 percent of the tasks currently done by the new front line of American intelligence spies – the analysts who collect, analyze, and interpret geospatial images beamed from drones, open source intelligence, reconnaissance or intelligence satellites and other feeds around the globe". Perhaps this is why the United States Central Intelligence Agency (CIA) is exploring the use of chatbots and generative artificial intelligence to assist its officers both in their day-to-day job functions and in their overarching spy missions.
Nevertheless, Patrick Biltgen, a principal at the defense and intelligence contractor Booz Allen Hamilton, is one of those who does not foresee artificial intelligence or ChatGPT upstaging human intelligence analysts or putting them out of jobs, at least for now. According to Biltgen, "a lot of artificial intelligence-aided reporting today is very formulaic and not as credible as human analysis". He asserts that a ChatGPT for national security analysis would have to be pre-trained "with all the intelligence reports that have ever been written, plus all of the news articles and all of Wikipedia". He says, "I don't believe you can make a predictor machine, but it might be possible for a chatbot to give me a list of the most likely possible next steps that would happen as a result of this series of events." Biltgen added that, "on the other hand, intelligence analysts are building upon their knowledge of what they have seen happen over time."
Amy Zegart, a senior fellow at Stanford's Freeman Spogli Institute for International Studies and the Hoover Institution, as well as chair of the HAI Steering Committee on International Security, believes that in the new era of artificial intelligence, "sophisticated intelligence can come from almost anywhere – armchair researchers, private technology companies, commercial satellites and ordinary citizens who livestream on Facebook". Zegart asserts that "human intelligence will always be important, but machine learning can free up humans for tasks that they're better at". While "satellites and artificial intelligence algorithms are good at counting the number of trucks on a bridge, they can't tell you what those trucks mean. You need humans to figure out the wishes, intentions, and desires of others." The less time human analysts spend counting trucks on a bridge, the more time they can devote to that kind of interpretation; there is a vast amount of open-source data, but you need artificial intelligence to sift through it.
According to MI5 Director-General Ken McCallum, "The UK faces a broader and more complex range of threats, with the clues hidden in ever-more fragmented data". For instance, artificial intelligence can be used to scan and triage images and to identify dangerous weapons. In this light, MI5 is partnering with the Alan Turing Institute to "apply artificial intelligence (AI) and data science to provide new insights, confront and mitigate national security challenges to the United Kingdom".
ChatGPT is an incredibly powerful tool that is reshaping OSINT investigations. It can automate repetitive tasks such as data collection and the analysis and extraction of information from large volumes of unstructured text drawn from various sources, making investigations more efficient and effective and allowing investigators to focus on what really matters: verifying the information properly. For example, an investigator can instruct ChatGPT to "provide a list of all known aliases and social media accounts associated with the individual named [insert name]" or to "extract all posts and comments made by the individual named [insert name] on the social media platform [insert platform]".
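As a hedged sketch, such an instruction might be sent to ChatGPT programmatically along the following lines, using the OpenAI Python package (pre-1.0 interface; the API key, model name, and prompt are placeholders for illustration):

```python
# Illustrative only: sending an OSINT-style instruction to ChatGPT via the
# OpenAI API. Newer versions of the openai package expose a different client
# interface (client.chat.completions.create), so adjust to your installed version.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

prompt = (
    "Provide a list of all known aliases and social media accounts "
    "associated with the individual named [insert name]."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

In practice the model has no live access to social media data and may refuse requests about private individuals, so, as noted above, whatever it returns still has to be verified by a human investigator.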
In addition, ChatGPT can be used for social network analysis, generating leads and surfacing connections between individuals and organizations. For instance, an investigator can instruct ChatGPT to "find connections between individuals and organizations based on their online presence and interactions, and list them". Google Dorking, also known as Google Hacking, is a related technique used by sleuths in OSINT (open-source intelligence) investigations to search for specific information on the internet using advanced search operators. It allows investigators to search specific parts of a website or narrow search results to a specific file type.
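A few commonly used Google Dorking operators are sketched below; the domain and keywords are placeholders rather than real targets:

```python
# Illustrative Google Dorking queries built from standard advanced search operators.
dorks = [
    'site:example.com "annual report"',    # restrict results to a single website
    'filetype:pdf "security policy"',      # return only PDF documents
    'intitle:"index of" backup',           # pages whose title contains a given phrase
    'inurl:admin site:example.com',        # keywords that must appear in the URL
]
for query in dorks:
    print(query)  # each line can be pasted directly into Google's search box
```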
Research shows that 80 percent of the information processed by intelligence and analytical bodies in United Nations peace operations originates from publicly available information (PAI) or, loosely, open-source intelligence (OSINT), making OSINT a discipline that significantly dominates the other intelligence disciplines (Nikolić 2017). For instance, by "tracking and constantly monitoring social media, analysts can gain real-time insights into public opinion even in volatile environments".
The United Nations is exploring the deployment of artificial intelligence, particularly machine learning and natural language processing, for the noble purposes of peace, security, and conflict prevention. Three applications come to mind: overcoming cultural and language barriers, anticipating the deeper drivers of conflict, and improving decision-making. An artificial intelligence system was recently used by the United Nations Support Mission in Libya (UNSMIL) to test support for potential policies, such as the development of a unified currency.