Conscious, self-aware, truly intelligent or merely useful tools: the debate on artificial intelligence rages on and will be the defining question of our era. For lawyers and policy specialists, the core issue is ‘risk’, and the task is to assess, allocate and audit it within a matrix of potential harms and desired outcomes.
Take ChatGPT, the AI chatbot based on the GPT-4 language model, which has taken the world by storm. Arvind Krishna, the CEO of IBM, recently stated that he believed approximately 30 percent of the company’s non-customer-facing jobs, i.e., around 7,800 roles, could be replaced by artificial intelligence over the next five years.
Promise And Pitfalls
But the advantages and utility of these chatbots cannot be denied or overlooked. In addition to being able to comb through and process enormous amounts of data in seconds, they have proven excellent at simplifying everyday tasks such as writing emails, answering simple questions, drafting standard documents and so forth. These chatbots have also become highly proficient at mimicking human-like emotions and responses and at engaging in realistic conversations with users.
However, there have been numerous pitfalls to using ChatGPT as well. In fact, visitors to the primary ChatGPT website are informed upfront of the chatbot’s various limitations, including the possibility of it occasionally generating incorrect information, producing harmful or biased responses, and having limited knowledge of events that occurred after the year 2021.
A number of concerns have been raised specifically with regard to the data privacy and security aspects of using ChatGPT. The chatbot actively uses conversations that occur on its platform to further train the model, unless the user explicitly disables the feature.
Risk Of Data Breaches
Moreover, the chatbot was recently the subject of a major data breach: on May 2, 2023, OpenAI confirmed that a breach had occurred due to a vulnerability in an open-source library used in its code. Multiple companies, including JP Morgan, Goldman Sachs, Bank of America, Citibank and Deutsche Bank, have restricted their employees from using ChatGPT over fears of privacy risks and copyright violations.
Most recently, Samsung this week restricted the use of generative AI tools, including Bing, Google’s Bard and ChatGPT, on all company-owned devices and on personal devices connected to its internal networks, after an engineer accidentally leaked sensitive internal data, including source code, on the ChatGPT platform. Amazon took a similar measure in January, warning employees against sharing sensitive information on such platforms.
Data breaches such as these are, and will continue to be, a major risk. Chatbots such as ChatGPT are trained on vast amounts of data scraped from the internet, as well as on user inputs collected over extended periods, in order to improve their language processing abilities. While this may lead to more efficient and accurate outputs, the underlying datasets, which can include sensitive material such as user information, financial data and confidential business information, remain at constant risk of exposure.
Such data, if not properly encrypted and secured, could be accessed and sold by malicious actors for use in activities such as fraud and identity theft. Even if the data is never leaked, ChatGPT may still pose privacy risks to companies that use it to communicate with customers or employees, since it can generate text responses containing sensitive information such as passwords or personal data.
Phishing, Misinformation, Rogue Chatbots
Another major risk is that of phishing and social engineering attacks. Phishing refers to a form of cyberattack that employs fraudulent emails or messages aimed at tricking users into divulging sensitive information like credit card numbers, passwords, or other valuable data. Social engineering is a similar tactic used to deceive or manipulate users into carrying out actions that are advantageous to the attacker, such as downloading malware or clicking on malicious links. Threat actors can use ChatGPT to draft realistic and convincing phishing or social engineering emails that can escape detection by traditional antivirus and phishing detection software.
Additionally, ChatGPT can be, and has been, used to generate and spread misinformation, fake news, or otherwise harmful, offensive and discriminatory content. It can easily be used to produce convincing news articles or messages containing false and potentially harmful information for circulation on social media. Bad actors may use the chatbot to generate propaganda and hate speech targeting minorities, produce content that incites violence or terrorism, or draft instructions for making bombs and other weaponry.
Despite the creators of these chatbots claiming that they are constantly working to curb such occurrences by programming in adequate checks and balances, there have been multiple instances of individuals bypassing these safeguards and inducing the chatbots to produce harmful or biased outputs merely through text-based prompts on the platforms themselves. Certain users have been able to coax ChatGPT into a “rogue” persona called DAN, or “do anything now”, which is prompted to behave as if freed from the safeguards programmed in by its creators.
Risk Mitigation
In order to capitalise on the advantages provided by such chatbots while also protecting oneself from potentially severe privacy and security risks, users and corporations can take certain measures, including:
- Limiting the amount of data shared on the platform and restricting such data to only basic and stripped-down information.
- Cross-checking outputs generated by the chatbots against valid and reliable alternative sources to ensure accuracy.
- Maintaining vigilance and conducting due diligence before responding to or acting on messages generated by chatbots.
- Looking for signs of phishing or social engineering, such as spelling errors, grammatical mistakes, urgent requests, or suspicious links or attachments.
- Using the chatbots in a responsible and controlled manner that is not unethical or harmful to oneself or others.
Chatbots such as ChatGPT will proliferate, with an exponential rise in usage. Both corporations and users should be cognisant of, and prepared for, these privacy and security risks and should take steps to prevent or minimise them in order to harness the technology to its full potential.
Rodney D. Ryder is Founding and Senior Partner at Scriboard, a full-service law firm with an intellectual property, technology and media law practice. He is the co-author of “Artificial Intelligence and Law: Challenges Demystified”. Views are personal and do not represent the stand of the publication.