The growing prominence of AI companies and the rapid uptake of AI technologies in our daily lives are not without risks. OpenAI, the company behind ChatGPT, has suffered multiple data breaches that exposed internal secrets, details of its AI product designs, and customer information.

While OpenAI did not issue an immediate response to the breach, the news raised alarms across the internet and the AI community. Despite the limited scope of the incident, it serves as a stark reminder of the increasing value, and vulnerability, of AI companies. In short order, these companies have made themselves favorite targets and treasure troves for hackers.

OpenAI Data Breach and The Hidden Risks of AI Companies

More About the OpenAI Data Breach

According to an exclusive report published by the New York Times, OpenAI suffered a cyber-attack early last year that the company never reported to law enforcement agencies.

Two OpenAI insiders, who remained anonymous, reported that the hacker gained access to OpenAI’s internal forums, where employees discuss projects. OpenAI chose not to make the news public or inform law enforcement at the time because, according to the company, none of the systems where it houses and builds its artificial intelligence were breached.

Exposed Data

According to the Reuters report, the ChatGPT data leak contained secrets and details about the design of AI technologies. The hacker also lifted discussions in which employees talked about OpenAI’s latest technologies and developments in next-generation AI. However, the hacker was unable to get into the central systems where the company houses and builds the AI that powers ChatGPT.

Impact of the OpenAI Data Breach

The attack was a serious concern for OpenAI employees, who feared that the vulnerability could be exploited further and lead to state-sponsored hacks in the future. ChatGPT is mostly a work and research tool today, but if it fell into the wrong hands, it could be used for nefarious purposes that endanger national security. Furthermore, the executives' handling of the incident exposed fractures inside the company and made employees question how seriously OpenAI takes security when it comes to protecting its proprietary technology from foreign adversaries.

Leopold Aschenbrenner, a former OpenAI technical manager, was fired after raising these security concerns. He argued, in a podcast interview with Dwarkesh Patel, that the company was not doing enough to prevent foreign adversaries from stealing its secrets.

Company’s Response

OpenAI executives only revealed the attack to employees during an all-staff meeting at the company’s San Francisco offices in April last year. The Microsoft-backed company decided not to make the news public since no customer or partner information was stolen. Furthermore, OpenAI executives did not consider the incident a national security threat, as the hacker appeared to be a private individual with no visible ties to any foreign government.

An OpenAI spokesperson told TechRepublic via email: “As we shared with our Board and employees last year, we identified and fixed the underlying issue and continue to invest in security”. In the name of developing safe and advanced AI, the company has also launched a bug-bounty program to scale up the security and trustworthiness of its platform, offering “$200 for low-severity findings to up to $20,000 for exceptional discoveries.”

What Are OpenAI and ChatGPT?

ChatGPT and OpenAI have been two of the most talked-about topics of the last two years. Let us dive right in and understand both. OpenAI is a company founded in 2015 with a mission to ensure that Artificial General Intelligence (AGI) benefits humanity. Since its founding, the company has been researching ways to make AGI safe while driving its broad adoption across the AI community.

OpenAI is a driving force in AI research and publishes much of its work to foster knowledge-sharing and collaboration within the worldwide AI community.

Some of OpenAI's popular AI products and services are:

  • GPT (Generative Pre-trained Transformer) models: An extensive family of language models that have revolutionized natural language processing tasks.
  • OpenAI Codex: A system that translates natural language into programming code, helping developers with automation, code suggestions, and complete code snippet generation.
  • DALL·E: An AI system that turns natural language descriptions into realistic images and art.
  • Dactyl: A human-like robotic hand that manipulates physical objects with high accuracy and dexterity.
  • Sora: A text-to-video AI model that can generate videos of up to one minute long, simulating the physical world in motion.

OpenAI is the parent company of many AI products that are revolutionizing today's landscape through natural language commands.

ChatGPT is OpenAI's most popular product and has taken Generative AI to a whole new level. This state-of-the-art chatbot is designed to generate human-like text-based outputs from a broad range of inputs. People today use ChatGPT to brainstorm ideas, do various kinds of writing, and create content based on textual, graphical, or voice-based prompts, all with a natural conversational fluidity that feels like talking to a real person.

ChatGPT is the result of years of extensive training on vast amounts of high-quality text data, making it one of the most sophisticated conversational AI models available today.

The top characteristics of ChatGPT are:

  • Conversational Fluidity: ChatGPT can interact with, respond to, and act upon the user's instructions as if they were talking to a human being. It can also understand language nuances such as cultural references, jokes, sarcasm, and irony, and respond appropriately.
  • Large Vocabulary: ChatGPT is trained on a vast corpus of text from the internet, weighing in at approximately 570 GB of data. With this much exposure, ChatGPT boasts a large vocabulary, recognizing terms, phrases, and even uncommon technical words.
  • Multilingual Awareness: ChatGPT is a versatile tool that can interpret human requests in different languages. OpenAI states that ChatGPT can understand prompts in more than 50 languages.
  • Context and Customization: ChatGPT considers previous messages, records patterns, learns from conversations, adapts itself, and produces answers that are more contextually sound. Moreover, users can fine-tune ChatGPT for specific applications, industries, and use cases to get more customized responses.
  • Moderation and Safety: OpenAI has fine-tuned and gated ChatGPT to produce safe, unbiased, harmless, and appropriate replies.

In conclusion, ChatGPT is the pinnacle of conversational AI, while OpenAI is the powerhouse behind such innovations and the umbrella of a plethora of other AI tools.

ChatGPT Data Breach in 2023

This is not the first time OpenAI has experienced a data breach. As reported by OpenAI, the company had a data leak on March 20, 2023. The ChatGPT breach lasted approximately nine hours and exposed the data of approximately 1.2% of ChatGPT Plus subscribers. The leaked data included names, chat histories, email addresses, and payment information such as the last 4 digits of the credit card on file and its expiry date.

How much is 1.2% in Numbers?

At the time of the 2023 ChatGPT breach, OpenAI had approximately 100 million users. Applying the 1.2% figure to the full user base gives a rough upper bound of around 1.2 million affected accounts; the true figure is lower, since the breach only concerned ChatGPT Plus subscribers, a subset of all users. Even if OpenAI is scaling down the impact of the issue, a potential exposure on this order is still very large for a data breach.
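The back-of-envelope estimate can be reproduced with simple arithmetic. Note that this sketch treats the 1.2% as applying to the full reported user base, so it is an upper bound, not a confirmed count:

```python
# Rough upper-bound estimate of affected accounts (figures from this article).
total_users = 100_000_000   # approximate ChatGPT user base in March 2023
leak_fraction = 0.012       # 1.2% of ChatGPT Plus subscribers, per OpenAI

# Upper bound if the percentage applied to every user:
affected_upper_bound = int(total_users * leak_fraction)
print(f"{affected_upper_bound:,}")  # → 1,200,000
```

The real number of affected accounts depends on the (smaller) ChatGPT Plus subscriber count, which OpenAI did not disclose.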

Mitigation Efforts

After the bug surfaced, OpenAI took ChatGPT offline for approximately 24 hours to patch and fix the issue. The company reportedly reached out to the Redis maintainers to patch the vulnerability, as the bug was traced to an open-source Redis client library.

Company’s Response

The company’s explanation of the breach reads as somewhat confusing and downplaying. However, in an official statement, the company said that it had extensively tested and fixed the underlying bug to increase the robustness of ChatGPT.

The seriousness of the cybersecurity risks can be gauged from the fact that OpenAI CEO Sam Altman publicly addressed the breach in a post on the social media platform X. The post reads:

we had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating. a small percentage of users were able to see the titles of other users’ conversation history. we feel awful about this.
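The class of bug Altman describes, where one user is shown another user's data, can be illustrated with a deliberately simplified Python sketch. The actual flaw was in an open-source Redis client library; the class name, variable names, and messages below are invented for illustration only. The core idea is that a pipelined connection returns responses strictly in order, so a request canceled mid-flight leaves a stale response on the wire for the next caller:

```python
from collections import deque

class SharedCacheConnection:
    """Toy connection that returns responses strictly in FIFO order,
    mimicking a pipelined cache client shared via a connection pool."""
    def __init__(self):
        self._pending = deque()  # responses queued "on the wire"

    def send(self, user, query):
        # The server computes a response and queues it for delivery.
        self._pending.append(f"{user}'s chat titles")

    def read(self):
        # Each read pops the oldest unread response off the wire.
        return self._pending.popleft()

conn = SharedCacheConnection()

# Alice sends a request, but her read is canceled before completing,
# leaving her response sitting unread on the shared connection.
conn.send("alice", "GET my_chat_titles")
# (canceled: conn.read() never happens for Alice)

# Bob's request reuses the same pooled connection...
conn.send("bob", "GET my_chat_titles")
stale = conn.read()  # ...and receives Alice's stale response.
print(stale)  # → alice's chat titles
```

Bob asked for his own chat titles but received Alice's, which is exactly the symptom users reported: seeing the titles of other users' conversation histories.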

The Importance of Data Security in the Context of AI

Data security should be of paramount importance for AI companies because they hold colossal amounts of data. Data security is not only about safeguarding information; it is also about maintaining trust and preserving the privacy and integrity of AI decision-making.

To understand the risks AI poses in cybersecurity, we have to understand the types of data these AI companies use to train their models:

  • High-Quality Training Data: Valuable datasets act as the foundation for developing sophisticated AI models. Protecting these assets is crucial for AI companies.
  • Bulk User Interactions: Analyzing user interactions across billions of conversations provides deep insights into users' behavior and preferences. Securing this data is essential for maintaining trust, privacy, and integrity.
  • Customer Data: Businesses feed AI tools sensitive data that is both highly valuable and vulnerable. Protecting this data is vital for both AI companies and their clients.

Apart from data types, the following sectors represent the major AI-driven data transactions and data collection touchpoints:

  1. Healthcare: This sector deals with highly sensitive patient data. AI systems operating in healthcare institutions must comply with stringent regulations to ensure the confidentiality and safety of patient records.
  2. Banking & Finance: AI systems in financial institutions handle extensive financial data, transaction histories, and customer spending patterns. Ensuring the safety of financial data is essential, as a breach can lead to significant financial loss and erosion of customer trust.
  3. Automotive: This sector increasingly relies on AI with the introduction of autonomous vehicles and smart cars. Safeguarding AI systems in both the vehicles and the wider automotive industry is crucial, as unauthorized access can lead to accidents or misuse of vehicles.
  4. Manufacturing: Driven by digital transformation, AI adoption in manufacturing is on the rise across industries of all sizes. AI systems in these industries have access to proprietary protocols, critical infrastructure, and intellectual property, and any breach can lead to sabotage and industrial espionage.
  5. Energy and Utilities: AI systems in the energy and utilities sector help make automated decisions to manage power plants, water supply systems, and natural gas delivery. Any breach can disrupt essential services.

In a rapidly evolving AI-driven future, trust is an elusive asset. AI companies can grow their market footprint by safeguarding users' data and proving themselves trustworthy.

OpenAI macOS Scandal

After a series of issues, OpenAI came under the spotlight again in July 2024. As reported on Threads by Pedro José, a senior developer in Swift (the programming language used for macOS app development), the ChatGPT macOS app was found storing user conversations in plain text in an unprotected location. Because OpenAI distributed the app via its own website rather than the Mac App Store, it bypassed Apple's app sandboxing feature.
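The exposure this creates can be shown with a short, hypothetical Python sketch (the file name and conversation contents are invented for illustration). Anything written in plain text to an unprotected location is readable by any other unsandboxed process running under the same user account, with no credentials required:

```python
import json
import tempfile
from pathlib import Path

# A chat app naively writing conversations as plain-text JSON
# to an unprotected location on disk.
store = Path(tempfile.mkdtemp()) / "conversations.json"
store.write_text(json.dumps({"chat_1": "user's private conversation"}))

# A completely unrelated process running as the same user can
# simply open and read the file.
leaked = json.loads(store.read_text())
print(leaked["chat_1"])  # → user's private conversation
```

Encrypting the conversations at rest, as OpenAI's fixed version of the app does, means a snooping process sees only ciphertext instead of readable chat history.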

The Verge reached out to OpenAI for comment, and company spokesperson Taya Christianson replied, “We are aware of this issue and have shipped a new version of the application which encrypts these conversations.” She added, “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”

The Future of AI Security

AI is making a significant impact across all areas of our lives. How people work, socialize, and interact with each other and with machines is being rapidly revolutionized by the adoption of AI. As this adoption increases, specialized and expedited action must be taken not only to safeguard customer data but also to protect AI models and algorithms from abuse, trolling, and other nefarious purposes. While traditional security practices remain of utmost importance, AI companies need to evolve to meet the security challenges posed by the AI/ML landscape.

To address these new challenges, companies must develop AI-focused penetration testing, security review bodies, and tools and frameworks for securing AI-based products and services. Efforts should be made to ensure AI is unbiased and has built-in transparency and accountability. Furthermore, AI systems and AI companies themselves should be resilient against attacks such as dataset tampering and social engineering.

Companies should also invest in AI-powered cybersecurity solutions to mitigate and thwart cyber-attacks. Sangfor provides premium solutions to ensure that the best safety measures are maintained. The Sangfor Next Generation Firewall (NGFW) is used to identify malicious files at both the network level and endpoints. Additionally, Sangfor’s advanced Endpoint Secure technology provides integrated protection against malware infections and APT breaches across your entire organization's network. Sangfor’s Cyber Command platform uses an enhanced AI algorithm to monitor for malware, residual security events, and future potential compromises in your network.

In conclusion, securing AI's future requires industry collaboration, alignment, and a proactive approach to address emerging threats and safeguard AI's use.
