Email Interview
Interview by: YourStory https://yourstory.com/
Interviewee: Chetan Jain | Managing Director
Ans: AI is leveraged to enhance defenses by cybersecurity teams. Unfortunately, the same technology is also used to create sophisticated attacks benefiting cybercriminals, and transforming the threat landscape rapidly. Cybersecurity threats emerging through AI are many. The Autonomous AI Attacks are AI agents that identify vulnerabilities, plan, and execute an attack, all on their own without human intervention. Within minutes, a malicious AI can identify outdated software and exploit it across several thousand websites. Prompt Injection tricks AI systems into not heeding their safety rules, leading to the revealing of sensitive information. AI-Powered Malware tools, such as FraudGPT, enable cyber attackers to create malware that transforms its appearance every time, and becomes a challenge. Deepfake Scams are highly realistic AI-generated voices and videos used to impersonate CEOs or government officials. In Smarter Phishing, AI writes highly convincing personalized phishing messages where an email referencing a recent purchase is received by a victim rather than a generic bank alert. In Shadow AI, the AI tools are used by employees within organizations without actual approval. Sensitive customer data can be fed into an unapproved AI tool, leading to compliance and data privacy risks. AI-Enhanced Hacking can speed up reverse engineering, crack passwords, or identify vulnerabilities at unprecedented speed. In Stealth Attacks, AI can help attackers operate without files by hiding in legitimate tools.
Ans: AI-powered threats are not entirely new, but they change the speed, scale, and sophistication of traditional threats. Cyberthreats such as malware, social engineering, and phishing have existed for decades, and AI technology enables anyone to deploy these threats with ease. Cybercriminals can generate them rapidly, while cybersecurity teams struggle to detect them in time. Earlier, a phishing campaign could have taken several days to get designed and tested, with grammatical mistakes while formatting often exposing them. Today, by leveraging AI technology, threat actors can generate the same campaign in minutes, personalized for each recipient (victim), and make it look like a legitimate message.
There are some of the evolved threats driven by AI capabilities, such as Prompt Injection Attacks and Autonomous AI Attacks.
Ans: AI governance refers to a set of principles, processes, standards, policies, practices, and tools that support the management of AI usage within organizations in the most responsible way. This framework serves as a guideline to reduce potential risks of AI by developing, deploying, and monitoring the technology, ensuring they align with the original business goals as well as meet regulatory compliance. Regulatory and industry bodies such as the EU Act, NIST AI Risk Management Framework, and ISO/IEC 42001 introduced these guidelines for the responsible use of AI. Furthermore, every organization has to translate these industry guidelines into internal governance structures customized to its context. This can include defining AI usage policies, creating model risk assessment checklists, setting up bias detection and explainability reviews, and ensuring incident response for AI-related failures.
Ans: Organizations must have a good understanding of the various tactics attackers use to strengthen their skills that lead to data breaches. In addition to traditional cybersecurity controls, organizations have to implement AI-specific security measures. Super-strong passwords are a must, and they should not be reused across other documents. Software should be updated and vulnerabilities patched. Multi-factor authentication provides an additional layer of protection against unauthorized access. AI-driven security solutions, such as AI-powered endpoint security and AI-powered Security Information and Event Management (SIEM), among others, enhance the effectiveness of cybersecurity tools and have to be leveraged extensively. AI-monitoring to identify unusual behavior, model hardening to safeguard AI systems from attacks such as data poisoning, AI red teaming, where teams simulate adversarial attacks on AI systems to spot vulnerabilities enhance protection from AI cybersecurity threats. GenAI can help create realistic simulations of cyberattacks, enabling security teams to test their cybersecurity defenses and incident response measures. Employee training and awareness have to be enhanced to establish a vigilant workforce. Businesses have to invest in R&D to continuously strengthen their cybersecurity solutions in today’s rapidly evolving threat landscape.
Ans: The government’s role in policies and regulations in AI is to ensure AI technologies are built and leveraged responsibly and ethically. Organizations should be required to disclose the use of AI, especially in decision-making processes that have an impact on individuals. Transparency and accountability should be mandatory in the explainability of AI outputs, traceability of training data, and clear lines of responsibility in the case of AI failures. As AI advances further, transparency and accountability should continue to remain top priorities for policymakers.
Clear AI security standards should be established, where minimum security requirements for AI systems are well defined.
To ensure all AI policies are well-rounded and address different concerns, it is important to engage with different stakeholders, the public, corporates, as well as academic institutions. The government should establish AI security centers that can collaborate with the industry to share best practices and threat intelligence, and rapid-response capabilities for AI-driven cyber incidents.
The AI Advantage: Enhancing Cyber Resilience in Healthcare
By: Pritam Shah, Global Practice Head – OT Security and Data Security, Inspira Enterprise