Bulwark Intelligence

PART 2: NATIONAL SECURITY, CYBER RISKS, GEOPOLITICAL, CAREER, AND EDUCATIONAL CONSEQUENCES OF ARTIFICIAL INTELLIGENCE-CHATGPT

The Geopolitical Upshots of Artificial Intelligence, ChatGPT

ChatGPT seems to be engendering geopolitical competition between world powers. Ideally, ChatGPT should be accessible anywhere in the world with internet connectivity, but this is far from the reality. Some countries, especially authoritarian regimes such as China, Russia, Afghanistan, Iran, Venezuela, and North Korea, implement censorship and surveillance to monitor internet usage and restrict the use of ChatGPT due to geopolitical and national security concerns. China leads the pack. Though not officially available in China, ChatGPT caused quite a stir there. Some users are able to access it using tools such as virtual private networks (VPNs) or third-party integrations into messaging apps such as WeChat, circumventing the Chinese government's censorship.

Japan's Nikkei news service reported that Chinese tech giants Tencent and Ant Group were told not to use ChatGPT services on their platforms, either directly or indirectly, because there appears to be growing alarm in Beijing over the AI-powered chatbot's uncensored replies to user queries. Writing in Foreign Policy, Nicholas Welch and Jordan Schneider cited a recent write-up by Zhou Ting (dean of the School of Government and Public Affairs at the Communication University of China) and Pu Cheng (a Ph.D. student), who argued that "the dangers of AI chatbots include becoming a tool in cognitive warfare, prolonging international conflicts, damaging cybersecurity, and exacerbating global digital inequality."

Zhou and Pu alluded to an unverified ChatGPT conversation in which the bot justified the United States shooting down a hypothetical Chinese civilian balloon floating over U.S. airspace, yet answered that China should not shoot down a similar balloon originating from the United States. According to Shawn Henry, Chief Security Officer of the cybersecurity firm CrowdStrike, "China wants to be the No. 1 superpower in the world and they have been targeting U.S. technology, U.S. personal information. They've been doing electronic espionage for several decades now."

A report from the cybersecurity company Feroot said the TikTok app can collect and transfer your data even if you have never used the app. "TikTok can be present on a website in pretty much any sector in the form of TikTok pixels/trackers. The pixels transfer the data to locations around the globe, including China and Russia, often before users have a chance to accept cookies or otherwise grant consent," the Feroot report said. The top three EU bodies – the European Parliament, the European Commission, and the EU Council – as well as the United States, Denmark, Belgium, Canada, Taiwan, Pakistan, India, and Afghanistan, have all banned TikTok, especially on government devices, citing cybersecurity concerns. New Zealand became the latest country when it announced on March 17 that TikTok would be banned on the phones of government lawmakers by the end of March 2023.

Not to be outflanked, Chinese search giant Baidu is set to release its own AI-powered chatbot. Chinese e-commerce giant Alibaba is reportedly testing ChatGPT-style technology and has christened its artificial intelligence language model DAMO (Discovery, Adventure, Momentum, and Outlook). Another Chinese e-commerce company, JD.com, says its "ChatJD" will focus on retail and finance, while TikTok has a generative AI text-to-image system.

Education And Plagiarism In The Age of ChatGPT

The advent of ChatGPT unnerved some universities and academics around the world. As an illustration, a 2,000-word essay written by ChatGPT earned a passing grade on an MBA exam at the Wharton School of the University of Pennsylvania. Apart from the Wharton exam, which ChatGPT passed with plausibly a B or B- grade, other advanced exams the AI chatbot has passed so far include all three parts of the United States Medical Licensing Examination, within a comfortable range. ChatGPT also recently passed exams in four law school courses at the University of Minnesota; in total, the bot answered over 95 multiple-choice questions and 12 essay questions that were blindly graded by professors.

Ultimately, the professors gave ChatGPT a "low but passing grade in all four courses," approximately equivalent to a C+. ChatGPT also passed a Stanford Medical School final in clinical reasoning with an overall score of 72%. GPT-4 recently took other exams, including the Uniform Bar Exam, the Law School Admission Test (LSAT), the Graduate Record Examinations (GRE), and Advanced Placement (AP) exams, and aced them all except English Language and Literature. ChatGPT may not always be a smarty-pants, however: it reportedly flunked the Union Public Service Commission (UPSC) exam used by the Indian government to recruit its top-tier officials.

Thus, several schools in the United States, Australia, France, and India have banned ChatGPT and other artificial intelligence tools on school networks and computers due to concerns about plagiarism and false information. Annie Chechitelli, Chief Product Officer of Turnitin, an academic integrity service used by educators in 140 countries, submits that artificial intelligence plagiarism presents a new challenge. In addition, Eric Wang, vice president for AI at Turnitin, asserts that "[ChatGPT tends] to write in a very, very average way. Humans all have idiosyncrasies. We all deviate from average one way or another. So, we are able to build detectors that look for cases where an entire document or entire passage is uncannily average."

Dr. LuPaulette Taylor, who teaches high school English in Oakland, California, is one of those concerned that ChatGPT could be used by students to do their homework, thereby undermining learning. Taylor, who has taught for the past 42 years, listed some skills that she worries could be eroded as a result of students having access to AI programs like ChatGPT. According to her, these include "the critical thinking that we all need as human beings, the creativity, and also the benefit of having done something yourself and saying, 'I did that.'"

To guard against plagiarism with ChatGPT, Turnitin recently developed an AI writing detector that, in its lab, identifies 97 percent of ChatGPT- and GPT-3-authored writing, with a very low false positive rate of less than 1 in 100. Interestingly, a survey shows that teachers are actually using ChatGPT more than students. The study by the Walton Family Foundation found that within only two months of its introduction, 51% of 1,000 K-12 teachers surveyed reported having used ChatGPT, with 40% using it at least once a week.

The wolf at the door: ChatGPT could make some jobs obsolete

There are existential worries that artificial intelligence, and ChatGPT in particular, will lead to career losses. Pengcheng Shi, an associate dean in the department of computing and information sciences at the Rochester Institute of Technology, believes the wolf is at the door and that ChatGPT will affect some careers. He affirms that the financial sector, health care, publishing, and a number of other industries are vulnerable. According to a survey of 1,000 business leaders in the United States by resumebuilder.com, companies currently use ChatGPT for writing job descriptions (77 per cent), drafting interview questions (66 per cent), responding to applicants (65 per cent), writing code (66 per cent), writing copy/content (58 per cent), customer support (57 per cent), creating summaries of meetings or documents (52 per cent), research (45 per cent), and generating task lists (45 per cent). Within five years, 63 per cent of business leaders say ChatGPT will "definitely" (32 per cent) or "probably" (31 per cent) lead to workers being laid off.

As a matter of fact, companies are already reaping the rewards of deploying ChatGPT: 99 per cent of employers using ChatGPT say they have saved money. When assessing candidates to hire, 92% of business leaders say having artificial intelligence/chatbot experience is a plus, and 90% say it is beneficial if the candidate has ChatGPT-specific experience. It follows that job seekers will need to add ChatGPT to their skill set to make themselves more marketable in a post-ChatGPT world because, as seen throughout history, workers' skills must evolve alongside technology.

Software engineering: Now that ChatGPT can seamlessly draft code and generate a website, anyone who hitherto earned a living doing such work should be worried. By contrast, Professor Oded Netzer of Columbia Business School reckons that AI will help coders rather than replace them.

Journalism: Artificial intelligence is already making inroads into newsrooms, especially in newsgathering, copy editing, summarizing, and making articles concise.

Legal profession: Writing under the banner "Legal Currents and Futures: ChatGPT: A Versatile Tool for Legal Professionals," Jeanne Eicks, J.D., associate dean for Graduate and Lifelong Learning Programs at The Colleges of Law, posits that chatbots such as ChatGPT can support legal professionals in several ways: streamlining communication and scheduling meetings with clients and other parties involved in legal cases, automating the creation of legal documents such as contracts and filings, conducting legal research and searching legal databases, and analyzing data to make predictions about legal outcomes.

Graphic design: An artificial intelligence tool, DALL-E, can generate tailored images from user-supplied prompts on demand. Artificial intelligence image generators such as Stable Diffusion, WOMBO, Craiyon, Midjourney, and DALL-E (owned by OpenAI, the company behind ChatGPT) use deep learning models trained on large datasets to create new, remarkably detailed images from a text prompt. These tools will pose a threat to many in the graphic and creative design space.
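To make the point concrete, here is a minimal sketch of how a text-to-image generation call looks in practice, assuming the open-source Hugging Face diffusers library, a CUDA-capable GPU, and an illustrative public Stable Diffusion checkpoint; the prompt and file name are hypothetical.

```python
# A minimal text-to-image sketch using the open-source diffusers library.
# Assumptions: a CUDA GPU and the illustrative "runwayml/stable-diffusion-v1-5" checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained diffusion pipeline (text encoder + denoising model + image decoder)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a watercolor logo of a lighthouse at dawn"  # hypothetical design brief
image = pipe(prompt).images[0]  # the pipeline returns generated PIL images
image.save("lighthouse_logo.png")
```

Under the hood, the model iteratively denoises random noise, guided by a text encoder's representation of the prompt, which is what lets a short brief produce a finished-looking image.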

Customer service agents: Robots and chatbots are already doing customer service jobs – chatting and answering calls – and ChatGPT and related technologies could ramp up this trend. Indeed, a 2022 study from the tech research company Gartner predicted that chatbots will be the main customer service channel for roughly 25% of companies by 2027.

Artificial intelligence and banking: In the financial and banking world, artificial intelligence and ChatGPT can be deployed in customer service, fraud detection, wealth management, financial planning, know your customer (KYC) and anti-money laundering (AML) checks, customer onboarding, and risk management.
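As a rough illustration of the fraud-detection use case, the sketch below flags anomalous card transactions with a generic anomaly-detection model from scikit-learn; the features, figures, and threshold are hypothetical and not drawn from any real banking system.

```python
# Toy fraud-detection sketch: flag anomalous card transactions with scikit-learn.
# The feature set and values are invented purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount (USD), hour of day, distance from home (km)
transactions = np.array([
    [12.50,   9,   2.0],
    [48.00,  13,   5.5],
    [7.25,   18,   1.2],
    [9800.0,  3, 900.0],  # unusually large, late-night, far-away transaction
])

model = IsolationForest(contamination=0.25, random_state=42)
model.fit(transactions)

# predict() returns -1 for suspected anomalies and 1 for normal transactions
for row, flag in zip(transactions, model.predict(transactions)):
    status = "REVIEW" if flag == -1 else "ok"
    print(f"{status}: amount={row[0]:.2f}, hour={int(row[1])}, distance={row[2]:.1f} km")
```

A production system would of course combine far richer features with rule-based checks and human review.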

Healthcare industry: Artificial intelligence (AI) has made significant advances in the healthcare industry. ChatGPT can be deployed as a virtual assistant for telemedicine, remote patient monitoring, medical record-keeping and writing clinical notes, medical translation, disease surveillance, medical education, mental health support, identifying potential participants for clinical trials, and triaging patients by asking them questions about their symptoms and medical history to determine the urgency and severity of their condition. Similarly, Dr. Beena Ahmed, an Associate Professor at the University of New South Wales (UNSW), Australia, believes that artificial intelligence and machine learning systems could in future be used to make predictions about specific health outcomes for individuals based on medical data collected from large populations, thereby improving life expectancy.

Automobile industry: American automaker General Motors is mulling the idea of deploying ChatGPT as a virtual personal assistant in its vehicles. For instance, if a driver got a flat tire, they could ask the car to play an instructional video inside the vehicle on how to change it. Hypothetically, a diagnostic light could pop up on a car's dashboard and the motorist could ask the digital assistant whether they should pull over or keep driving and deal with the issue when they get home. It might even be able to make an appointment at a recommended repair shop.

Supply chain management: ChatGPT is anticipated to be a game-changer for supply chain management. Machine learning in this domain will chiefly be applied to demand forecasting, inventory optimization, and customer service.
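To make the forecasting point concrete, here is a minimal demand-forecasting sketch using scikit-learn, in which next month's demand is predicted from the two previous months; the sales figures are invented for illustration.

```python
# Toy demand-forecasting sketch: predict next month's demand from the previous two months.
# The monthly figures below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

monthly_demand = [120, 135, 150, 160, 172, 185, 198, 210]  # units sold per month

# Lagged features: demand at months t-2 and t-1 predicts demand at month t
X = np.array([[monthly_demand[i - 2], monthly_demand[i - 1]]
              for i in range(2, len(monthly_demand))])
y = np.array(monthly_demand[2:])

model = LinearRegression().fit(X, y)

# Forecast the next month from the two most recent observations
next_month = model.predict([[monthly_demand[-2], monthly_demand[-1]]])[0]
print(f"Forecast demand for next month: {next_month:.0f} units")
```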

The implication of the foregoing is that the most resilient careers will be those that require face-to-face interaction and physical skills that AI cannot replace. Hence, trades such as plasterers, electricians, and mechanics, and services – everything from hairdressers to chiropodists – will continue to rely on a human understanding of the task and the human ability to deliver it.

Cybersecurity And The Dark Side of Artificial Intelligence

ChatGPT's capabilities can also be abused by threat actors to facilitate common cyber-attacks, including:

  • Phishing emails: A phishing email is a social engineering attack in which the attacker crafts a fraudulent yet believable email to deceive recipients into carrying out harmful instructions, such as clicking on a malicious link, opening an attachment, providing sensitive information, or transferring money into specific accounts.
  • Data theft: Any unauthorized access to and exfiltration of confidential data on a network. This includes personal details, passwords, or even source code, which threat actors can use in a ransomware attack or for any other malicious purpose.
  • Malware: Malicious software is a broad term referring to any kind of software that intends to harm the user in some form. It can be used to infiltrate electronic devices and servers, steal information, or simply destroy data.
  • Botnets: A botnet attack is a targeted cyber-attack in which a group of internet-connected devices – computers, servers, and others – is infiltrated and hijacked by a hacker.

The Positive Side of ChatGPT in Enhancing Cybersecurity

On the flip side, ChatGPT, like other large language models, can be deployed to enhance cybersecurity. Some examples include:

  • Phishing detection: ChatGPT can be trained to identify and flag potentially malicious emails or messages designed to trick users into providing sensitive information (see the sketch after this list).
  • Spam filtering: ChatGPT can be used to automatically identify and filter out unwanted messages and emails, such as spam or unwanted advertising.
  • Malware analysis: ChatGPT can be used to automatically analyze and classify malicious software, such as viruses and trojans.
  • Intrusion detection: ChatGPT can be used to automatically identify and flag suspicious network traffic, such as malicious IP addresses or unusual patterns of data transfer.
  • Vulnerability assessment: ChatGPT can be used to automatically analyze software code to find and report vulnerabilities, such as buffer overflows.
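As a minimal sketch of the phishing-detection idea above, the snippet below asks a ChatGPT-class model to triage a suspicious email. It assumes the official openai Python SDK (v1.x), an OPENAI_API_KEY in the environment, and an illustrative model name; the email text is invented.

```python
# Minimal phishing-triage sketch using the openai Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set; the model name and email body are illustrative.
from openai import OpenAI

client = OpenAI()

email_body = (
    "Dear user, your mailbox is full. Click http://example.com/verify "
    "within 24 hours and confirm your password to avoid suspension."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,
    messages=[
        {"role": "system",
         "content": "You are a security assistant. Reply with PHISHING or LEGITIMATE, "
                    "followed by a one-sentence reason."},
        {"role": "user", "content": email_body},
    ],
)

print(response.choices[0].message.content)
```

In practice such a classifier would sit behind a mail gateway and complement, rather than replace, traditional rule-based filters.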

ChatGPT And Privacy Concerns

Data privacy: ChatGPT is unarguably a data privacy nightmare; if you have ever posted online, you ought to be concerned. More than 4% of employees have [in]advertently put sensitive corporate data into large language models (LLMs) such as ChatGPT, raising concerns that their popularity may result in massive leaks of proprietary information if adequate data security is not in place. This explains why some global institutions, including JP Morgan and KPMG, have blocked the use of ChatGPT, while others, like Accenture, are instructing their teams to be cautious with how they use the technology because of privacy concerns. In a similar vein, Mishcon de Reya LLP, a British law firm with an international footprint, banned its lawyers from typing client data into ChatGPT over privacy and security fears.
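One simple form of the "adequate data security" mentioned above is to scrub obvious identifiers before any text leaves the organization. The sketch below is a hypothetical, regex-based redaction pass, not a complete data loss prevention solution; the patterns and sample prompt are illustrative.

```python
# Hypothetical pre-submission redaction filter: scrub obvious identifiers
# (email addresses, card-like numbers, phone numbers) before text is sent to an external LLM.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the text is sent onward."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = ("Summarize this: client Jane Roe (jane.roe@example.com, "
          "card 4111 1111 1111 1111) disputes the invoice.")
print(redact(prompt))
# -> Summarize this: client Jane Roe ([EMAIL REDACTED], card [CARD REDACTED]) disputes the invoice.
```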

Conclusion:

It is obvious that ChatGPT has far-ranging implications and ramifications for national security, open source intelligence (OSINT) gathering, education, research, and the workforce. But the most alarming ramifications of such a technological innovation will be seen in the areas of disinformation and cyber-crime, because ChatGPT carries with it a tremendous risk of misuse, buoyed by the paucity of regulatory frameworks. There is always a fear that government involvement can slow innovation.

However, I agree with the submissions of Sophie Bushwick and Madhusree Mukerjee in Scientific American, where they advocated, inter alia, for regulation and oversight of the use of artificial intelligence. They contend that "overly strict regulations could stifle innovation and prevent the technology from reaching its full potential. On the other hand, insufficient regulation could lead to abuses of the technology." Hence, it is important to strike the right balance.
