From The Guardian: US air force denies running simulation in which AI drone ‘killed’ operator
The US air force has denied conducting an AI simulation in which a drone decided to "kill" its operator to stop the operator from interfering with the drone's efforts to achieve its mission.
An official said last month that in a virtual test staged by the US military, an air force drone controlled by AI had used “highly unexpected strategies to achieve its goal”.
Read full article: https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test?CMP=oth_b-aplnews_d-1
Federal agencies reported over 30,000 cyber incidents in FY22
Atlas VPN covered the recently published FISMA report by the United States Office of Management and Budget (OMB) for the fiscal year 2022.
The FISMA report published by the OMB provides information about the overall state of government information security, including challenges, progress, and incidents.
In fiscal year 2022, agencies reported around 6% fewer cyber incidents overall than in the previous year.
There were 30,659 cyber incidents in FY 2022, according to the OMB’s annual FISMA report to Congress, down from 32,509 in 2021.
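The year-over-year decline can be checked with simple arithmetic, using the two totals from the OMB report cited above:

```python
# Incident totals from the OMB FISMA report to Congress
fy2021 = 32_509
fy2022 = 30_659

decline = fy2021 - fy2022            # absolute drop in reported incidents
pct = decline / fy2021 * 100         # percentage decrease year over year
print(f"{decline} fewer incidents, a {pct:.1f}% decrease")  # about 5.7%, i.e. "around 6%"
```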
The Federal Information Security Modernization Act (FISMA) requires Federal agencies to develop, document, and implement agency-wide information security programs to protect sensitive government information and operations.
Agency officials, such as chief information officers and inspectors general, conduct annual reviews of an agency's information security program and submit them to the OMB.
The OMB gathers all those annual reviews and summarizes them in the FISMA report, which is then submitted to Congress.
These reports are publicly available on the WhiteHouse.gov website.
Improper usage incidents were the most commonly reported by Federal agencies in FY 2022, with 10,467 total cases, a slight uptick from 10,123 in 2021.
Improper usage incidents result from violating the organization’s acceptable usage policies, like using work computers for personal matters.
In addition, agencies said email or phishing attacks increased slightly to more than 3,010 last year from 2,962 in 2021.
The most significant growth in incidents was seen in the loss or theft of equipment category.
Around 1,000 computing or media devices were reported lost or stolen in FY 2021; in FY 2022, the number climbed to 1,786 incidents.
The most common attack vector remains in the “unknown” category.
Major incidents on the decline
According to OMB, 93% of the incidents in 2022 were classified as “baseline” or “unsubstantiated or inconsequential event[s].”
Only four of the nearly 31,000 incidents reported by agencies in FY 2022 were classified as major.
Government bodies affected by the incidents included the Department of Education, the Department of Treasury, and the Department of Agriculture.
One incident remains classified.
In contrast, agencies encountered seven major incidents in FY 2021.
Overall, the number and severity of incidents remained relatively similar across FY 2022 and FY 2021.
Read the full article: https://atlasvpn.com/blog/federal-agencies-reported-over-30-thousand-cyber-incidents-in-fy22
From Password Manager: 1 in 6 Security Experts Say There’s a “High-Level” Threat of AI Tools Being Used to Hack Passwords
Since the launch of sophisticated AI-driven tools such as ChatGPT and Google’s Bard, reports have emerged that indicate these tools could help hackers steal passwords and phish sensitive information even more effectively than before.
In order to learn how much of a threat this poses to the average American, in April, PasswordManager.com surveyed 1,000 cybersecurity professionals.
Key findings:
- 56% are concerned about hackers using AI-powered tools to steal passwords
- 52% say AI has made it easier for scammers to steal sensitive information
- 18% say AI phishing scams pose a "high-level" threat to both average American users and companies
Read full article: https://www.passwordmanager.com/1-in-6-security-experts-say-theres-a-high-level-threat-of-ai-tools-being-used-to-hack-passwords/
From Blackberry: SideWinder Uses Server-side Polymorphism to Attack Pakistan Government Officials — and Is Now Targeting Turkey
The BlackBerry Threat Research and Intelligence team has been actively tracking and monitoring the SideWinder APT group, which has led to the discovery of their latest campaign targeting Pakistan government organizations.
In this campaign, the SideWinder advanced persistent threat (APT) group used a server-based polymorphism technique to deliver the next stage payload.
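Server-side polymorphism means the server mutates the payload on every request, so each victim receives a functionally identical file with a different hash, defeating signature- and hash-based detection. A minimal sketch of the principle only (the placeholder payload and the trivial junk-byte mutation are invented for illustration, not SideWinder's actual tooling):

```python
import hashlib
import os

# Placeholder bytes standing in for the functionally identical payload logic
BASE_PAYLOAD = b"...functionally identical payload logic..."

def serve_polymorphic(base: bytes) -> bytes:
    """Simulate one server response: append random junk bytes so every
    download has a unique hash while behaviour is unchanged."""
    return base + b"\x00" + os.urandom(16)

a = serve_polymorphic(BASE_PAYLOAD)
b = serve_polymorphic(BASE_PAYLOAD)

# Same logic, different signatures: a static hash blocklist never matches twice
print(hashlib.sha256(a).hexdigest() != hashlib.sha256(b).hexdigest())  # True
```

This is why defenders tracking such campaigns pivot to behavioural indicators and infrastructure rather than file hashes.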
Read full article: https://blogs.blackberry.com/en/2023/05/sidewinder-uses-server-side-polymorphism-to-target-pakistan
From Clint Watts: Rinse and repeat: Iran accelerates its cyber influence operations worldwide
Clint Watts, General Manager of the Digital Threat Analysis Center at Microsoft, writes:
Iran continues to be a significant threat actor, and it is now supplementing its traditional cyberattacks with a new playbook, leveraging cyber-enabled influence operations (IO) to achieve its geopolitical aims.
Microsoft has detected these efforts rapidly accelerating since June 2022. We attributed 24 unique cyber-enabled influence operations to the Iranian government last year – including 17 from June to December – compared to just seven in 2021. We assess that most of Iran’s cyber-enabled influence operations are being run by Emennet Pasargad – which we track as Cotton Sandstorm (formerly NEPTUNIUM) – an Iranian state actor sanctioned by the US Treasury Department for their attempts to undermine the integrity of the 2020 US Presidential Elections.
Read full article: https://blogs.microsoft.com/on-the-issues/2023/05/02/dtac-iran-cyber-influence-operations-digital-threat/
From AP: Twitter changes stoke Russian, Chinese propaganda surge, by David Klepper
David Klepper reports:
WASHINGTON (AP) — Twitter accounts operated by authoritarian governments in Russia, China and Iran are benefiting from recent changes at the social media company, researchers said Monday, making it easier for them to attract new followers and broadcast propaganda and disinformation to a larger audience.
The platform is no longer labeling state-controlled media and propaganda agencies, and will no longer prohibit their content from being automatically promoted or recommended to users. Together, the two changes, both made in recent weeks, have supercharged the Kremlin’s ability to use the U.S.-based platform to spread lies and misleading claims about its invasion of Ukraine, U.S. politics and other topics.
Read full story: https://apnews.com/article/twitter-russia-china-elon-musk-ukraine-2eedeabf7d555dc1d0a68b3724cfdd55
Darktrace Publishes Data Regarding How Generative AI Changes Everything You Know About Email Cyber Attacks
Monday 3rd April 2023
In new data published today, Darktrace reveals that email security solutions, including native, cloud, and ‘static AI’ tools, take an average of thirteen days from an attack being launched on a victim to that attack being detected, leaving defenders vulnerable for almost two weeks if they rely solely on these tools.
In March 2023, Darktrace commissioned a global survey with Censuswide of 6,711 employees across the UK, US, France, Germany, Australia, and the Netherlands. The survey gathered third-party insights into human behavior around email: how employees globally react to potential security threats, their understanding of email security, and the modern technologies being used to transform the threats against them.
Key findings (globally and US) indicate:
- 82% of global employees are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
- The top three characteristics of communication that make employees think an email is a phishing attack are: being invited to click a link or open an attachment (68%), unknown sender or unexpected content (61%), and poor use of spelling and grammar (61%)
- Nearly 1 in 3 (30%) global employees have fallen for a fraudulent email or text in the past
- 70% of global employees have noticed an increase in the frequency of scam emails and texts in the last 6 months
- 87% of global employees are concerned about the amount of personal information available about them online that could be used in phishing and other email scams
- Over a third of people (35%) have tried ChatGPT or other generative AI chatbots
The email threat landscape today
Darktrace researchers observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT[1]. These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length with no links or attachments. The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.
In addition, threat actors are rapidly exploiting the news cycle to profit from employee fear, urgency, or excitement. The latest iteration of this is the collapse of Silicon Valley Bank (SVB) and the resulting banking crisis, which has presented an opportunity for attackers to spoof highly sensitive communication, for example seeking to intercept legitimate communication instructing recipients to update bank details for payroll. 73% of employees working in financial services organizations have noticed an increase in the frequency of scam emails and texts in the last 6 months.
Innocent human error and insider threats remain an issue. Many of us (nearly 2 in 5) have sent an important email to the wrong recipient with a similar-looking alias, by mistake or due to autocomplete. This rises to over half (51%) in the financial services industry and 41% in the legal industry, adding another layer of security risk that isn't malicious. A self-learning system can spot this error before the sensitive information is incorrectly shared.
What does the arms race for generative AI mean for email security?
Your CEO emails you to ask for information. It's written in the exact language and tone of voice that they typically use. They even reference a personal anecdote or joke. Darktrace's research shows that 61% of people look for poor spelling and/or grammar as a sign that an email is fraudulent, but this email contains no mistakes. The spelling and grammar are perfect, it includes personal information, and it's utterly convincing. But your CEO didn't write it. It was crafted by generative AI, using basic information that a cyber-criminal pulled from social media profiles.
The emergence of ChatGPT has catapulted AI into the mainstream consciousness – 35% of people have already tried ChatGPT or other Gen AI chatbots for themselves – and with it, real concerns have emerged about its implications for cyber defence. 82% of global employees are concerned that hackers can use generative AI to create scam emails indistinguishable from genuine communications.
Emails from CEOs or other senior business leaders are the third most likely type of email for employees to engage with, with over a quarter of respondents (26%) agreeing. Defenders are up against generative AI attacks: linguistically complex, entirely novel scams that use techniques and reference topics never seen before. In a world of increasing AI-powered attacks, we can no longer put the onus on humans to determine the veracity of communications. This is now a job for artificial intelligence.
By understanding what's normal, self-learning AI can determine what doesn't belong in a particular individual's inbox. Legacy email security systems get this wrong too often: 79% of respondents say their company's spam/security filters incorrectly stop important legitimate emails from reaching their inbox.
With a deep understanding of the organization, and how the individuals within it interact with their inbox, the AI can determine for every email whether it’s suspicious and should be actioned or if it’s legitimate and should remain untouched.
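The "learn what's normal" idea described above can be caricatured as a per-mailbox baseline: score each inbound message against the recipient's historical senders and flag deviations. A toy sketch only; the sender-frequency feature and threshold are invented for illustration, and real products model far richer behavioral signals:

```python
from collections import Counter

def build_baseline(history: list[str]) -> Counter:
    """Count how often each sender has previously mailed this inbox."""
    return Counter(history)

def is_suspicious(sender: str, baseline: Counter, min_seen: int = 2) -> bool:
    """Flag senders rarely or never seen before for this recipient."""
    return baseline[sender] < min_seen

# Hypothetical mailbox history (addresses are invented examples)
history = ["ceo@corp.example"] * 40 + ["hr@corp.example"] * 10
baseline = build_baseline(history)

print(is_suspicious("ceo@corp.example", baseline))      # False: well-established sender
print(is_suspicious("ceo@corp-example.net", baseline))  # True: look-alike domain, never seen
```

Note that the look-alike domain is caught even though the message content could be flawless, which is exactly the gap a linguistically perfect AI-written lure exploits.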
This approach can stop threats like:
- Phishing and URL-based spear-phishing
- CEO fraud
- Business email compromise (BEC)
- Invoice fraud
- Data theft
- Social engineering
- Ransomware and malware
- Supply chain attacks
- Account takeover
- Human error
- Insider threats
Unlike other email security tools, self-learning AI in email is not trained on what 'bad' looks like; instead, it learns you and the normal patterns of life for each unique organization.
Social engineering – specifically malicious cyber campaigns delivered via email – remains the primary source of an organization's vulnerability to attack. Popularised in the 1990s, email-borne attacks have challenged cyber defenders for almost three decades. The aim is to lure victims into divulging confidential information through communication that exploits trust, blackmails, or promises reward, so that threat actors can get to the heart of critical systems.
Social engineering is a profitable business for hackers: by some estimates, around 3.4 billion phishing emails are delivered every day.
As organizations continue to rely on email as their primary collaboration and communication tool, email security tools that rely on knowledge of past threats are failing to future-proof organizations and their people against evolving email threats.
Widespread accessibility to generative AI tools, like ChatGPT, as well as the increasing sophistication of nation-state actors, means that email scams are more convincing than ever.
Humans can no longer rely on their intuition to stop hackers in their tracks; it’s time to arm organizations with an AI that knows them better than attackers do.
END
[1] Based on the average change in email attacks between January and February 2023 detected across Darktrace’s email deployments with control of outliers.
Geopolitical Tensions Enabled Increased Hacktivist Cyber Threats in 2022
New report from FS-ISAC highlights opportunity for cyberattacks against public and private institutions
Reston, VA, March 21, 2023 – FS-ISAC, the member-driven, not-for-profit organization that advances cybersecurity and resilience in the global financial system, today announced the findings of its annual Global Intelligence Office report, Navigating Cyber 2023.
The latest report showcased the effect that Russia’s invasion of Ukraine had on the global cyber threat landscape, sparking a flood of ideologically driven “hacktivism” that continues to this day. Driven from both sides of the conflict, the threats have increased substantially within the financial services sector, particularly for institutions in countries that Russia considers hostile. These threats can come from hacktivist groups or directly from the nation-states themselves.
“Unfortunately, the growing involvement of non-state actors attacking on an ideological basis and the manipulation of information by malicious actors will continue to sow uncertainty across the landscape in actual and perceived security threats,” said Steven Silberstein, CEO of FS-ISAC. “The best tool available for financial institutions to combat this is intelligence sharing, allowing collaboration across the global industry and ensuring better cyber preparedness. Cyber threats often evolve faster than the tools we use to combat them, but our strength is in our community.”
The report also highlights that some of the more traditionally common cyber threats, such as DDoS attacks and ransomware, are becoming more sophisticated and the suite of tools at a malicious actor’s disposal continues to develop.
Looking ahead into 2023, some of the key drivers of change in the threat landscape include:
- A growing market for malware-as-a-service: As threat actors become specialized in specific aspects of the kill chain and offer their services in skills and code for sale, cyberattacks become easier to orchestrate, less attributable, and of lower risk. Supply chain threats proliferate as key software, authentication, technology, and cloud service providers are increasingly targeted.
- The accessibility of AI helping attackers, and defenders: The emergence of new AI technology lowers the barrier to hacking, allowing threat actors to use tools like ChatGPT to design ever more convincing phishing lures. However, those same tools will be leveraged to strengthen defenses as well.
- Cryptocurrency offers a prime target for cyber criminals: Cryptocurrency and digital assets are becoming more integrated into global financial infrastructure, generating a complex regulatory environment for multinational firms. In addition, threat groups will continue to finance their operations using cryptocurrency, highlighting the need for better oversight and asset class protections.
“Cyber criminals are endlessly inventive, and aided by technological advances,” said Teresa Walsh, Global Head of Intelligence at FS-ISAC. “The emergence of new technologies and malware delivery tactics will require institutions to ensure they keep up with evolving cyber threats on a continuous basis and focus on resilience so they can keep operating no matter what happens.”
The threat landscape is rapidly changing, and organizations face key challenges of increasing regulation around the world, seismic shifts in the cyber insurance market, and cybersecurity talent shortages. Facing massive changes in their operational environment, the financial services sector must navigate pressures to reduce costs without compromising the ability to continuously evolve defenses and enhance operational resilience.
Methodology
The Navigating Cyber 2023 report is sourced from FS-ISAC's thousands of member financial firms in 75 countries and further augmented by analysis from the Global Intelligence Office. Multiple streams of intelligence were leveraged in curating the round-up, which examined data from January 2022 to January 2023. A publicly accessible version of the report is available on the FS-ISAC website; the full report is available only to member financial institutions.
About FS-ISAC
FS-ISAC is the member-driven, not-for-profit organization that advances cybersecurity and resilience in the global financial system, protecting the financial institutions and the people they serve. Founded in 1999, the organization’s real-time information-sharing network amplifies the intelligence, knowledge, and practices of its members for the financial sector’s collective security and defenses. Member financial firms represent $100 trillion in assets in 75 countries.
Contacts for Media
media@fsisac.com
Policy Insight: The RESTRICT Act
The US Senate, with support from the White House, has introduced the RESTRICT Act, a piece of legislation that would, according to the White House, “empower the United States government to prevent certain foreign governments from exploiting technology services operating in the United States in a way that poses risks to Americans’ sensitive data and our national security.”
According to Kevin Bocek, VP Ecosystem and Community at Venafi:
“The recently introduced RESTRICT Act would establish new, broad powers for the US Government to target possible threats to national security, personal privacy, and competitive threats. This goes well beyond a TikTok ban. It could change everything, from the phones in our pockets, to who gets to use emerging AI. And it brings back memories of the Encryption Wars of the 1990s when governments sought to control encryption technologies that we take for granted with bans and backdoors.
We’re now at a serious point in time, where the technologies in our pockets, homes, streets, businesses, airports and beyond can be used as part of kinetic warfare. And the RESTRICT Act targets the issues that we must face in the West.
Governments are finally waking up to the fact that adversaries don’t just use missiles and tanks – but instead, they take advantage of modern-day technology, controlled by machines connecting to the Internet. The worrying reality is that this technology can be monitored and controlled. For example, cranes built in China that offload containers from ships can not only be monitored but also potentially hijacked to create chaos and damage. Likewise, technologies from generative AI, to the graphic cards that make machine learning happen, are available globally and can be abused by adversaries.
The potential impact of the RESTRICT Act isn’t just a ban on TikTok. It’s the opening to what’s likely to be a decades long technology Cold War. One where the machines and software they run – which powers economies and innovation – will become a battleground for governments looking to stop adversaries in the AI, always-connected, and cloud computing driven age.”
From Proofpoint: Don’t Answer That! Russia-Aligned TA499 Beleaguers Targets with Video Call Requests
Key Takeaways
- TA499, also known as Vovan and Lexus, is a Russia-aligned threat actor that has aggressively engaged in email campaigns since at least 2021.
- The threat actor’s campaigns attempt to convince high-profile North American and European government officials as well as CEOs of prominent companies and celebrities into participating in recorded phone calls or video chats.
- The calls are almost certainly a pro-Russia propaganda effort designed to create negative political content about those who have spoken out against Russian President Vladimir Putin and, in the last year, opposed Russia’s invasion of Ukraine.
- TA499 is not a threat to take lightly, given the damage such propaganda could do to the brand and public perception of those targeted, as well as its role in perpetuating disinformation.



