Protecting The Future: Google and AI Security

It's fair to say AI is the future. We find more uses for it every day, and it touches every aspect of life. But as with all new technology, there are inevitable concerns. AI is still a relatively new advancement, so general knowledge of it remains low, which naturally makes those less familiar with it hesitant about the tech.

In this blog, we will look at Gemini and the steps Google has taken to ensure it meets their existing high standards of data protection. We will also look at how Google intends to use AI to solve a decades-long cybersecurity issue.  

Gemini & Security

The main concern around Generative (Gen) AI is security.

With a system that learns as it is used, people often feel that the more they use it, the more it will learn about them. However, this isn't the case. 

In keeping with the rest of Google's products, Gemini conforms to the General Data Protection Regulation (GDPR) enforced by the EU, which regulates how businesses store and use your data. These regulations require informed user consent, a high level of data security, and transparency in data processing.

Gemini doesn't store or use your data to learn; your data is yours. 

Gemini uses information from trusted testers who are given features to trial ahead of the full release. These groups are aware that Gemini is tracking their responses and are often given prompts designed to test specific behaviours.

Google responds to general user prompts using numerous security-tuned foundation models and added capabilities such as multi-step reasoning, extensions, and grounding databases.

Google has big plans for AI, far beyond Gemini.

The Defender's Dilemma

One of those plans is their aim to address a long-standing issue in cybersecurity: 'The Defender's Dilemma'.

The idea behind this dilemma is that attackers need only one successful attack to break through even the strongest defences, whereas defenders must stay vigilant at all times to protect themselves. Defenders' resources are often static and discoverable, and mitigations are often developed manually.

Google believes that AI can reverse this dilemma, automating defence and shifting the burden back onto attackers.

They plan to use AI to understand and manage the complexities that create so much vulnerability in the digital domain, to upskill all users into competent defenders, and, in some cases, to shift AI's role from assistive to autonomous.

They have branded these shifts as a move towards an 'Intelligent Digital Immune System.'

Google believes that, over time, AI will be able to merge and automate the feedback between a number of functions, including:

Threats: AI-driven threat intelligence systems that monitor attacker trends

Vulnerability Discovery: Systems would learn from attacker trends, driving the discovery of new vulnerabilities and misconfigurations

Code & Configuration System: Learning from these new vulnerabilities and misconfigurations, the system will update secure deployment and guardrails for both developers and administrators

Secure Code Generation: AI can propose new code and secure configurations to patch weaknesses

Automated Updates: AI can test and deploy new patches and configuration changes

Detection & Incident Response: Systems can learn from baseline endpoint telemetry and user behaviour to detect threats in the environment, summarise alerts and incidents for analysts and propose next steps

Continuous Monitoring: AI can continuously monitor system performance and control posture before making recommendations
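To make the feedback between these functions concrete, here is a minimal sketch of one cycle of such a loop. This is purely illustrative: every name, data structure, and stage below is an assumption of ours, not a real Google system or API. The point is only that each stage consumes the previous stage's output, so the loop can run without manual hand-offs.

```python
# Hypothetical sketch of one pass through an "immune system" style loop:
# threat intel -> vulnerability discovery -> proposed fix -> automated deploy.
# All functions and data shapes are illustrative assumptions.

def threat_intelligence(raw_reports):
    """Summarise attacker trends from raw threat reports."""
    return {report["technique"] for report in raw_reports}

def vulnerability_discovery(trends, assets):
    """Flag assets exposed to any of the observed attacker techniques."""
    return [asset for asset in assets if asset["exposed_to"] & trends]

def propose_fixes(vulnerable_assets):
    """Propose a change per asset (stand-in for secure code generation)."""
    return [{"asset": a["name"], "action": "harden"} for a in vulnerable_assets]

def deploy(fixes, assets):
    """Apply proposed fixes (stand-in for tested, automated rollout)."""
    patched_names = {fix["asset"] for fix in fixes}
    for asset in assets:
        if asset["name"] in patched_names:
            asset["exposed_to"] = set()  # exposure closed after hardening
    return assets

def run_cycle(raw_reports, assets):
    """One full pass of the loop: intel -> discovery -> fix -> deploy."""
    trends = threat_intelligence(raw_reports)
    vulnerable = vulnerability_discovery(trends, assets)
    fixes = propose_fixes(vulnerable)
    return deploy(fixes, assets)

reports = [{"technique": "phishing"}, {"technique": "credential-stuffing"}]
assets = [
    {"name": "mail-gateway", "exposed_to": {"phishing"}},
    {"name": "vpn", "exposed_to": {"sql-injection"}},
]
patched = run_cycle(reports, assets)
# The mail gateway is hardened because it matched an observed trend;
# the VPN is untouched because its exposure did not match this cycle's intel.
```

In the vision described above, each of these hand-crafted stages would instead be an AI-driven system, and detection and monitoring would feed their findings back in as the next cycle's reports.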

These changes will not be implemented in the immediate future; the technology isn't quite up to the required level yet.

While nobody can predict how AI will evolve, its rapid progress in recent years gives us reason to hope that this vision will eventually be not just an idea but a reality we can't imagine life without.

The Future of AI

Google is predicting a shift in cybersecurity. 

Those looking to get into cybersecurity will shift from learning the traditional set of required skills towards gaining the ability to integrate several AI systems and workflows to create self-healing networks. Google feels this is the key to getting AI to the required level.

To reach this level, they believe we need to start building AI systems on strong secure-by-design fundamentals. Such a system must manage immense complexity while giving high-quality, reliable answers, and it must provide general security expertise that transfers to new and unseen domains.

One of their keys is better programmability, meaning security capabilities should be baked into solutions from the start rather than offered as a standalone or additional product.

The future of AI is exciting. Nobody can say for certain where it could take us. But AI is already here, and it's having a positive effect on the lives of everyone who uses it. Google is leading the charge in AI, ensuring that it is best in class across the board.

Security is no different: Google is keeping Gemini at the forefront so that Google Workspace & Cloud security maintains its strong position. And it's not just Workspace & Cloud; Google is a market leader in ensuring that cybersecurity continues to adapt to the ever-changing landscape.
