Cyber Security Articles


What This Cyber Security Articles Page Is About

The goal of Journal of Cyber Policy is to provide commentary and stimulate conversations about important cyber security topics. Our parallel goal is to discuss cyber issues in plain English, liberating this critical subject from the exclusive realm of specialized engineers and hackers. Throughout, we try to talk about cyber security and related issues from the perspectives of public policy, national security, corporate policy and compliance.

 

Why Articles about Cyber Security Matter

We are living in an era where digital technology dominates so much of our lives. Digital risk naturally accompanies this reality. Smartphones, the IoT, the Internet and so forth make our lives easier, but they also expose us to threats. Some of these threats come from nation state actors. We believe Americans could be better-informed about these risks. And, while there’s certainly no lack of content online about cyberthreats, room still exists for cyber security articles that integrate the subject’s diverse themes of technology, politics and business.

For example, Russian disinformation and Chinese espionage are not new, but today’s digital landscape makes these familiar tactics deadly, in political terms. The Cold War was largely analog in nature, with offensive campaigns quite limited in scope and impact. While Cold War dynamics may survive today, they are having a radically different effect on American society and politics than anything that came before.

It can be tricky to tease out the differences between today and a generation ago. American politics and governance have always been messy, dishonest and idiotic, but there were at least some fact-based controls on it. This is no longer the case. Our enemies are exploiting this new reality. In some cases, they’ve created this new reality.

We see the impacts of these new measures, but leaders across the government and business sectors generally fail to understand the transformative nature of technology, e.g. Amazon is not just a bigger mail order store; the iPhone is not just a phone with fancy features, and so forth. These cognitive gaps lead to deficiencies in the perception of risk. They enable our leaders to underestimate our enemies and how they can win without firing a shot. We also tend to overestimate our defenses and resiliency.

The digitization of society, commerce and politics renders America defenseless in ways that we are only beginning to understand. Digital transformation is a double-edged sword. America’s rush to digitize its economy and society produces as much risk as it does benefit. For example, we have to manage the tensions between mobility and surveillance, between big data and privacy and so on.

The Topics We Cover in These Articles

We deal with a wide range of cyber security topics in these articles. Some discuss cyber election interference. Others look at geopolitical cyber risks, such as our recent series on Russian disinformation and “Active Measures.” We will frequently check in on the state of enterprise architecture and cloud computing, seeking expert insights into the best practices and new security technologies that are influencing security policies in these areas of information technology. We cover the gamut of security subjects: malware, phishing, identity and access management (IAM), privileged access management (PAM), zero trust, data security, application security, secure DevOps (DevSecOps), red-blue teaming, automation, Security Orchestration, Automation and Response (SOAR), threat monitoring, incident response, intrusion detection, encryption, key management and on and on. Our cyber security articles also look at compliance and regulatory frameworks like the NIST CSF, GDPR, CCPA and more.

Rage Against the Machine Learning

A Q&A with Todd Olson, CEO of Pendo


1. What is a rage prompt? Can you share an example?

A rage prompt is the generative AI version of a rage click. Just as rage clicks happen when users repeatedly click an unresponsive or misleading button, rage prompts occur when users do not get the response they expect from an agent and continue prompting out of frustration.

For example, a customer support agent might type, “Show all open tickets for this customer.” If the result is off, they might try again with, “Give me recent issues for this client,” or, “What support cases are active?” Each variation reflects increasing frustration. These moments are more than failed interactions. They are behavioral signals that can guide agent builders to make improvements.
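A pattern like this can be detected directly from session telemetry. The sketch below is a minimal illustration under assumed names and thresholds, not Pendo's implementation: it flags a session when several similar prompts arrive in quick succession, using plain string similarity as a stand-in for real semantic comparison.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Prompt:
    text: str
    timestamp: float  # seconds since session start


def is_rage_sequence(prompts, min_similarity=0.5, max_gap=60.0, min_repeats=3):
    """Flag a likely rage-prompt sequence: several similar prompts
    issued in quick succession. Thresholds are illustrative."""
    if len(prompts) < min_repeats:
        return False
    recent = prompts[-min_repeats:]
    # Each prompt must arrive soon after the previous one and rephrase it.
    for earlier, later in zip(recent, recent[1:]):
        gap_ok = later.timestamp - earlier.timestamp <= max_gap
        similarity = SequenceMatcher(
            None, earlier.text.lower(), later.text.lower()
        ).ratio()
        if not (gap_ok and similarity >= min_similarity):
            return False
    return True
```

In a real pipeline the string-similarity check would be replaced by embedding distance or intent classification, but the shape of the signal is the same: rapid, repeated rephrasings of one request.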

2. What drives users to create rage prompts? Do they arise from deficiencies in AI agents?

Often they do, but not always because of technical flaws in the AI itself. Rage prompts typically result from a lack of clarity, context, or guidance. The AI might not understand the user’s intent, or the user may not know what the AI is capable of doing.

Sometimes this is due to limitations in training data. Other times it is caused by poor interface design or missing context. Either way, rage prompts are a clear signal that something in the experience is not working as expected.

3. What could make AI agents better and less prone to rage prompts?

Improving agent performance starts with observing how users interact with agents and identifying where friction occurs. The best way to reduce rage prompts is by giving agents context, clear instructions, and constant feedback.

At Pendo, we developed a feature called Agent Analytics, the first solution to measure AI agent performance. It gives product and IT leaders visibility into patterns like repeated prompts or abandoned sessions, and it supports data-informed decisions about where and how to improve the experience.

This is less about upgrading the AI and more about improving the system around it.

4. Do rage prompts suggest that a company has misused or overused agentic AI to the detriment of customer or user experience?

Not necessarily. Rage prompts are a normal part of learning how people interact with AI. They are useful indicators of friction and can help improve product quality over time; one customer noted that he finally understood what users want by observing the questions they ask the agent.

The concern arises when companies fail to track these signals or treat them as noise. That can lead to compounding frustration and missed opportunities. When teams pay attention to where rage prompts occur, they gain insight into both agent performance and user expectations.

5. What is the solution?

Teams need a structured way to capture and respond to these signals. That means instrumenting the experience to observe user behavior, analyzing where issues occur, and making targeted improvements based on those insights.

This is not just about tuning the AI model. It also involves improving user onboarding for the agent, providing clear guidelines on what the agent can and can’t help with, and collecting feedback after each interaction. Organizations that can connect behavior with action will be best equipped to improve their AI experiences over time.

6. Does agentic AI need to incorporate elements of artificial emotion (AE)?

Not in the way humans experience emotion, but yes, AI should be able to recognize and respond to emotional cues. That includes detecting patterns like repeated inputs, extended pauses, or erratic interactions that indicate frustration.

These cues can inform adaptive responses. For example, an agent might offer clarification, simplify its reply, or suggest alternate actions. The goal is not to replicate emotion, but to create a more responsive experience based on behavioral context.
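As a toy illustration of that idea, the mapping below pairs a few behavioral cues with adaptive responses. The signal and action names are hypothetical, invented for this sketch rather than drawn from any real agent framework.

```python
def choose_adaptation(signals):
    """Map observed behavioral cues to an adaptive response.

    `signals` is a set of hypothetical cue names detected from
    session telemetry; the returned action names are equally
    illustrative.
    """
    if "repeated_input" in signals:
        return "offer_clarification"  # ask what the user actually meant
    if "long_pause" in signals:
        return "suggest_examples"     # surface prompts the agent handles well
    if "erratic_interaction" in signals:
        return "simplify_reply"       # shorter, plainer answer with next steps
    return "no_change"
```

The point is not the specific rules but the loop: behavioral context in, adjusted response out, without the agent needing anything resembling emotion.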

7. What is Pendo’s solution?

Pendo helps companies understand how users interact with software, including AI agents, and take action to improve the experience. We capture every user interaction, from clicks and swipes to prompts and conversations to survey and poll responses, and synthesize the data to help teams understand where users are getting stuck on a task or workflow and take action to improve it. 

Agent Analytics supports this by allowing teams to view those signals in context and take informed action. Whether the issue is with an AI prompt, a feature rollout, or a complex workflow, the goal is the same: to improve the overall software experience based on real user behavior.

As AI becomes more embedded in software, this type of insight will be critical. It is not enough to build powerful tools. They have to be intuitive, effective, and aligned with what users actually need.


AI and Insurance: A Conversation with Claire Davey, SVP, Product Innovation and Emerging Risk at Relm Insurance

Major advances in technology bring about fevered hype, hopes and dreams, venture capital… and dread from insurance carriers. How will we insure against loss from technologies that no one really understands? Cyber risk is a recent example. Now, the insurance industry is scratching its collective head about artificial intelligence (AI). I spoke about this topic with Claire Davey, SVP of Product Innovation and Emerging Risk at Relm Insurance. Insuring the unknown is what Claire does all day, so she has a distinct and well-informed perspective on the issue.

Claire Davey, SVP, Product Innovation and Emerging Risk at Relm Insurance

According to Claire, AI is moving faster than regulation, and traditional insurers can’t keep up. From algorithmic bias to machine-driven cyber threats, businesses deploying AI face risks no legacy policy can cover. Potentially problematic scenarios abound, but high on the list are risks like an AI chatbot using biased, offensive language, AI software inadvertently ingesting private data and making it part of a general AI algorithm, generative AI (GenAI) engaging in “hallucinations” that create liability, and on and on. As adoption accelerates and compliance tightens, insurance becomes the missing bridge between innovation and investment.

Here are some of Claire’s thoughts on the matter:

Q: AI is moving faster than regulation, and traditional insurers can’t keep up. Can you expand on that? Where are you seeing the biggest gaps today?

A: In the EU, there is some clarity regarding regulation and the framework provided. The most concerning complexity arises in the US, where there are separate regulations per state, and industry bodies are seeking to implement their own requirements and frameworks. This makes it confusing and unpredictable for businesses using or developing AI, but it also increases the risk exposure for insurers. Without a consistent regulatory baseline, insurers are forced to navigate a moving target, which makes pricing, reserving, and even determining what is or isn’t covered far more challenging than in traditional lines.

Q: From an underwriting perspective, how does AI change the very nature of risk compared to traditional cyber or liability exposures?

A: Additional regulatory exposure is one factor. Another is the lack of mature governance procedures and frameworks within many organizations, which leads to oversight gaps.

On top of that, AI’s ability to develop and produce outcomes in ways that were unintended adds a layer of unpredictability that insurers have to account for.

However, I think there are generally a lot of similarities with how cyber risk emerged as an insurable risk. At first, it seemed quite unquantifiable, but over time, insurers developed the tools and data needed to model it. AI will likely follow a similar path, though the velocity of change is significantly faster.

Q: Do you think the real challenge is that existing insurance products weren’t built to anticipate AI-driven risks, e.g., cyber policies not accounting for model poisoning or adversarial attacks?

A: It is more the case that underwriting, pricing, and reserving models were not created with the understanding of exposures driven by AI. The issue is not only about coverage gaps but also about the actuarial foundation insurers rely on to set premiums and reserves; these models were not designed for the scale, speed, and novelty of AI-driven exposures.

Q: When Relm says it “underwrites AI,” does that mean you’re focused on insuring companies building AI systems, companies adopting AI in their operations, or both?

A: Both. We underwrite companies that are directly producing AI technologies as well as those that are adopting AI in their operations and therefore facing new risk exposures. For us, underwriting AI means recognizing that the risks can emerge anywhere in the value chain, from the developers building algorithms to the end users applying them in sensitive industries. Our role is to bridge that gap and provide coverage that adapts to both sides of the market.

Q: Looking ahead, what does the insurance industry need to change, in structure, product design, or mindset, to truly adapt to the AI era?

A: Monitor AI exposures and constantly tweak coverage, pricing, and reserving models. The pace of AI development means insurers can’t rely on static frameworks – they need to evolve coverage dynamically and work in closer collaboration with regulators, technologists, and businesses deploying AI. It’s also about adopting a mindset that sees insurance not just as a protective layer, but as an enabler of innovation, giving companies the confidence to deploy AI responsibly while knowing they have tailored risk transfer mechanisms in place.

Photo by Pixabay: https://www.pexels.com/photo/blue-bright-lights-373543/

We’re Taking a Break and Blasting Off into Orbit

Dear Readers:

The Journal of Cyber Policy is taking a break from publication as of July 2024. We’re not sure when we’ll be back, but as of now, our focus is shifting to space security. Our project is the Space Piracy Blog.

Thank you for all your support over the last six years.

Hugh Taylor

 

Photo by SpaceX

An Entire Organization Can Be Breached Just by Plugging in a Compromised Keyboard, Says Cyber Security Researcher Prathibha Muraleedhara

The advancement of technology has led to an increase in cybercrimes. Hackers employ diverse techniques to identify areas of weakness or vulnerabilities to infiltrate an organization’s network. The increase in remote work facilitated by organizations after the COVID-19 pandemic has led to a heightened risk from cybercriminals.

Wired mice and keyboards, with their messy cables, have become obsolete; wireless peripheral devices are now preferred for their convenient, cable-free connection. However, unlike other USB devices such as MFA authentication devices, memory card readers, fingerprint sensors, and USB storage devices, wireless keyboards and mice hardly include any security features. As a result, many of these peripheral devices are prone to security vulnerabilities that can lead to the complete compromise of the computers they are connected to and can be used to launch advanced attacks.

“Wireless peripheral devices like mice and keyboards use proprietary protocols operating in the 2.4GHz ISM band. Manufacturers of wireless mice and keyboards don’t adhere to the Bluetooth protocol, which has established industry-standard security schemas. Instead, they create their own security schemas, which often have vulnerabilities that can be exploited by malicious users” says Prathibha Muraleedhara.

Prathibha Muraleedhara is distinguished for her remarkable contributions to Fortune 500 companies like HP Inc., KPMG, and Stanley Black & Decker. She is a product security researcher and leader with over a decade of experience in protecting leading product-based manufacturing companies from cyber threats. She has made a significant impact by performing security architecture reviews and penetration testing on an extensive range of industry-leading products. Her invaluable assistance in identifying critical security vulnerabilities and remedying them has contributed substantially to enhancing the security of these products.

Prathibha describes various techniques through which wireless devices can be exploited to launch advanced cyber-attacks in her scholarly article – “Wireless Peripheral Devices – Security Risk, Exploits and Remediation” published in the Cyber Defense Magazine. She explains how some manufacturers do not encrypt the wireless connection between the peripheral devices and the USB dongle which allows hackers to capture the transmitted radio frequency packets and decode the mouse clicks and keystrokes transmitted. Also, due to a lack of authentication, the USB dongle will not be able to differentiate if the packets were initiated by a legitimate peripheral device or by the attacker. She highlights that this will enable hackers to send malicious keystrokes and mouse clicks to the target computer and further launch carefully crafted advanced cyber-attacks.

In her article, Prathibha discusses various classes of vulnerabilities that affect peripheral devices like keyboards and mice. These include sniffing transmitted radio frequency packets with the Nordic Semiconductor nRF24L01+; the Tempest attack, which spies on information systems by capturing leaked electrical or radio signals, vibrations, sounds, and other emanations; the SATAn air-gap exfiltration attack; far-field electromagnetic side-channel attacks; and KeySniffer exploits. She also describes the technical details of launching Bastille Research’s MouseJack attack, in which a hacker force-pairs an illegitimate peripheral device and injects keystrokes as a spoofed mouse or keyboard.

Drawing on her professional experience and expertise, Prathibha recommends that manufacturers encrypt the transmitted radio frequency packets to prevent sniffing, eavesdropping, interception, and analysis of the keystrokes transmitted. She also recommends that wireless devices authenticate to the paired dongle, preventing any rogue wireless device from connecting to the dongle and sending maliciously crafted keystrokes to the target computer.
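To make the remediation concrete, here is a minimal sketch of per-packet authentication with an anti-replay counter, using an HMAC over a shared pairing key. The packet format and function names are invented for illustration; real devices would additionally use authenticated encryption (e.g., AES-CCM) so the keystrokes themselves are confidential, not just tamper-evident.

```python
import hashlib
import hmac
import struct


def seal_packet(key: bytes, seq: int, keystroke: bytes) -> bytes:
    """Device side: authenticate a keystroke report with a per-packet
    sequence number. The MAC binds the payload to the counter, so a
    sniffer can neither inject a spoofed keystroke nor replay an old one."""
    header = struct.pack(">Q", seq)
    tag = hmac.new(key, header + keystroke, hashlib.sha256).digest()[:8]
    return header + keystroke + tag


def open_packet(key: bytes, last_seq: int, packet: bytes):
    """Dongle side: verify the MAC and reject stale sequence numbers.
    Returns (seq, payload) on success, None for forged or replayed packets."""
    header, payload, tag = packet[:8], packet[8:-8], packet[-8:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()[:8]
    (seq,) = struct.unpack(">Q", header)
    if not hmac.compare_digest(tag, expected) or seq <= last_seq:
        return None
    return seq, payload
```

Without the MAC, the dongle has no way to distinguish a legitimate keyboard from an attacker's transmitter, which is precisely the gap MouseJack-style attacks exploit.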

Prathibha Muraleedhara

Overall, exploits like Mousejack, KeyJack, and electromagnetic side-channel attacks prove that wireless products even from trusted manufacturers may be vulnerable to serious security exploits. Before the pandemic, organizations were only concerned with ensuring the physical security of their onsite company locations. However, the threat landscape has expanded as the workforce transitions from traditional onsite spaces to remote home offices. Organizations must now take necessary precautions to confirm that the peripheral devices they have provided to their employees are not susceptible to these exploits. If updated firmware is available from the manufacturers, it must be pushed to all the devices. All vulnerable devices with no firmware updates must be discarded.

African Outsourcing As a Path to Solving the Global Cybersecurity Talent Shortage 

Tabiri Analytics, which offers cybersecurity monitoring services, is pioneering an innovative approach to outsourcing that addresses the global cybersecurity talent shortage. By staffing its monitoring operation with personnel from Rwanda trained through a partnership with Carnegie Mellon University and other schools, Tabiri Analytics is able to deliver economical monitoring solutions while fostering socioeconomic growth in underserved communities.

“Our approach is truly a win-win for all stakeholders,” said Savannah Kadima and Edwin K. Kairu, co-founders of Tabiri Analytics. “Our clients benefit from a deeper bench of talent, while also getting a more affordable monitoring service. At the same time, we are equipping the next generation of cybersecurity professionals in underserved emerging market regions. We offer African students exclusive, mentored IT training along with educational opportunities targeted to further the skill sets of students interested in productive careers. There is so much potential and enthusiasm from young adults who demonstrate the desire to learn and contribute to the cybersecurity industry.”

Their timing is good. The cybersecurity industry is facing an acute talent shortage, with hundreds of thousands of jobs going unfilled worldwide. As companies grapple with this serious challenge, the cyber threat environment grows more intense—creating a high-risk scenario where attackers enjoy the advantage because cyber defenders cannot find the people they need to run cybersecurity operations. By working with a new talent pool in Africa, Tabiri Analytics has found a way to deliver a consistent, affordable monitoring service.

Tabiri Analytics can partner with existing in-house IT staff or a company’s outsourced IT services provider for the purpose of offloading the burden of continuously monitoring for cybersecurity issues.

Photo by Tima Miroshnichenko: https://www.pexels.com/photo/close-up-view-of-system-hacking-5380792/

Consumer Demand for Better Mobile App Security and Intensified Regulatory Scrutiny Create Need for Increased Cyber Resilience

By Alan Bavosa, VP of Security Products at Appdome

The mobile app economy will continue to expand at an increasing pace, as evidenced by consistent data from Appdome’s consumer surveys, which span over 75,000 consumers across 12 countries in 2021, 2022, and 2023 and reveal a migration to the mobile app channel to buy, save, share and support the brands they love. Mobile app traffic is now the dominant channel for brand interaction. As the mobile security space transforms from a niche, specialty market to a mature industry, regulatory and compliance scrutiny is also heightened. With scrutiny and compliance requirements intensifying in regions like LATAM, APAC, the US and the UK, the cybersecurity industry will witness a surge in emphasis, and demand, for mobile security and mobile-centric hiring practices in 2024.

Consumer expectations for mobile app security are growing and so is regulatory and compliance scrutiny. This means that mobile brands and developers must accept that the onus of protecting global consumers from cyber threats – be it hacking, data theft, fraud, or malware – falls squarely on their shoulders. More directly stated: users do not want to own security and are holding brands accountable for the protection of all personal data, and beyond.

In the U.S. alone, according to Appdome’s 2023 consumer survey, an eye-opening 73% of consumers confessed they would drop an app quickly if they sensed even the slightest weakness in security – and will abandon brands that don’t seem to care about their security or protect them.

Mobile consumers are becoming more and more cyber-savvy and expect app makers to build comprehensive security into mobile apps, moving the baseline from basic cyber protections to comprehensive mobile app defense. In fact, the survey found that consumers expect mobile brands to go one step further by preventing fraud instead of detecting it and reimbursing victims after it occurs. A staggering 82% of mobile consumers said they preferred mobile brands to stop mobile fraud before it started. Only 15% said they prefer to be reimbursed after it happens, and only a negligible amount (about 2%) said fraud protection is not important to them.

When asked who should bear the responsibility for mobile app protection, the majority of global consumers (56%) said they expect the mobile brand or developer of the app to protect them.

To meet the growing demands of consumers and regulatory entities alike, cybersecurity teams must start adopting developer best practices to ensure not only compliance but also cyber resilience. Cyber resilience in mobile apps is the ability to withstand and recover from security incidents or attacks in real time. For the longest time, the thought has been that mobile app developers should adopt cybersecurity best practices.

The release cycle for developing or updating mobile apps is very rapid – and short – with the entire workflow, including every tool used within, being automated. Traditional mobile app security tools, however, are the exact opposite of this as they rely on manual effort or impose cumbersome operations, and do not fit into the DevOps workflow – at all. This leads to security being ignored altogether, or the implementation of “bare minimum” security measures, which still requires a large time and effort commitment by the development team.

Tools, such as those provided by Appdome, that give developers a way to implement comprehensive security in a way that fits right into their existing, automated workflow, without any work on their part, are crucial for effectively implementing cybersecurity best practices in the development cycle.

Put simply, the only way that cybersecurity is going to have a true seat at the table is when the industry starts to adopt DevOps best practices. Cybersecurity teams would thus have an agile and rapid way to build their security model to protect against new threats and attacks that they were able to identify in production.

As noted above, data from Appdome’s 2023 consumer survey revealed that mobile applications dominate the consumer share of mind and wallet. Additionally, consumers now ‘feel the pain’ and have begun to take any lack of protection in the mobile apps they use personally. Going further, they openly place the responsibility for mobile app defense on the mobile brand and developer providing the app. Mobile brands are advised to listen to consumers’ biggest fears like hacking, fraud, and malware, and respond to the high cyber and anti-fraud expectations consumers have in using mobile apps for life and work.

A company’s mobile cyber defense culture should always protect the customer first. What is encouraging is that the reward for developers for protecting Android and iOS apps and users is better than ever – an overwhelming 93.6% of global consumers confirm a willingness to promote mobile apps and brands to others if they feel those apps are protecting them, their data, and their usage. All the more reason to make mobile app protection a top priority.

Photo by Towfiqu barbhuiya: https://www.pexels.com/photo/close-up-of-a-smart-phone-with-a-lock-11391947/

New Study Reveals Top 10 US States at Highest Risk of Cybercrime in 2024

A new study conducted by mobile and telecommunication experts NetworkBuildz has determined the states most at risk of cyber-attacks in 2024. The research examined each annual FBI Internet Crime Report from 2018-2022 to collate the number of cyber-attacks in each state over the last five years, which were then normalized per 100,000 people to reveal the states most susceptible to cybercrime.

The top 10 states most susceptible to cybercrime

| Rank | State | Total cyber-attacks over the last five years | Cyber-attacks per 100,000 people |
| --- | --- | --- | --- |
| 1 | Nevada | 54,515 | 1,756 |
| 2 | Alaska | 8,453 | 1,153 |
| 3 | Maryland | 58,627 | 951 |
| 4 | Colorado | 53,562 | 928 |
| 5 | Florida | 193,602 | 899 |
| 6 | Iowa | 28,256 | 886 |
| 7 | Washington | 67,434 | 875 |
| 8 | Delaware | 8,648 | 874 |
| 9 | California | 316,565 | 801 |
| 10 | Arizona | 53,318 | 746 |

The study revealed that Nevada residents are at the highest risk of cybercrime in the country. Over the last five years, Nevada experienced its highest occurrence of cybercrimes in 2021, with a total of 17,706 cyber-attacks in one year alone. Overall, between 2018-2022, the state recorded 54,515 total cyber-attacks, which equates to 1,756 attacks per 100,000 people and correlates to a total victim loss of $320,052,803.

Alaska placed next in the ranking, with the second-highest number of cybercrimes over the last five years. The data reported a total of 8,453 reported cyber-attacks between 2018-2022, resulting in 1,153 cyber-attacks per 100,000 people. Additionally, victim losses amounted to $50,511,484 in Alaska between 2018-2022.

Maryland comes in third place on the list. The state saw 58,627 cyber-attacks between 2018-2022, which equates to 951 attacks per 100,000 people, with a victim loss of $479,475,435. Maryland experienced its highest occurrence of cybercrimes in 2020, with a total of 14,804 recorded crimes in one year alone.

The study ranks Colorado in fourth place, with 928 cyber-attacks per 100,000 people. Over the last five years, Colorado experienced 53,562 recorded cyber-attacks, which amounted to a total of $508,886,418 in victim losses in the state. In 2020, Colorado experienced its highest incidence of cybercrimes, with a total of 12,325 recorded crimes throughout the year.

The study places Florida in fifth place. Over the reports spanning from 2018 to 2022, Florida experienced significant cybercrime impact, with a total of 193,602 cyber-attacks. As such, this translates to 899 cybercrimes per 100,000 people.

Iowa ranks sixth, with 886 cyber-attacks per 100,000 reported over the last five years. When broken down, Iowa suffered 28,256 cybercrimes between 2018-2022, which resulted in a total state loss of $141,282,658.

Washington ranks in seventh place, with a total of 875 cyber-attacks per 100,000 people, and Delaware narrowly follows in eighth place, with 874. The top ten states most susceptible to cybercrime are rounded out with California (801) and Arizona (746) in ninth and tenth place, respectively.

A spokesperson for NetworkBuildz commented on the findings: “This study can provide a useful indication into which regions of the country are most at risk of cybercrime in 2024, with Nevada taking the top spot. These findings also stress the financial dangers of cybercrime, highlighting the importance of staying informed about the latest scams, best online practices, and necessary precautions to help reduce the risks of cybercrime.

“Seven of the top 10 states experienced the highest number of cybercrimes in 2020. As such, this suggests that 2020 was a particularly detrimental year for cyber-attacks nationwide, possibly due to the vulnerable state of the global landscape due to the Covid-19 pandemic.”

 

Source: Federal Bureau of Investigation’s annual Internet Crime Reports from the following years: 2018, 2019, 2020, 2021 and 2022. 

Methodology: The study combined the number of cyber-attacks reported over the last five years. To ensure fairness, it calculated the number of cyber-attack victims per 100,000 people in each state to produce the ranking.
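The normalization is simple arithmetic. A minimal sketch, using the attack totals cited above for Maryland and Iowa; the population figures are approximations added for illustration and are not from the study:

```python
# Normalize raw cyber-attack counts to a rate per 100,000 residents
# so states of very different sizes can be compared fairly.
def attacks_per_100k(total_attacks: int, population: int) -> int:
    return round(total_attacks / population * 100_000)

# Attack totals are from the article; populations are assumed approximations.
states = {
    "Maryland": (58_627, 6_165_000),
    "Iowa": (28_256, 3_189_000),
}
for name, (attacks, pop) in states.items():
    print(name, attacks_per_100k(attacks, pop))  # 951 and 886, matching the ranking
```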

Full ranking: The states most susceptible to cybercrime

Rank | State | Cyber-attacks per 100,000 people
1 | Nevada | 1,756
2 | Alaska | 1,153
3 | Maryland | 951
4 | Colorado | 928
5 | Florida | 899
6 | Iowa | 886
7 | Washington | 875
8 | Delaware | 874
9 | California | 801
10 | Arizona | 746
11 | Indiana | 741
12 | Virginia | 740
13 | Oregon | 652
14 | Wisconsin | 642
15 | New York | 635
16 | Connecticut | 621
17 | Wyoming | 617
18 | New Jersey | 613
19 | Utah | 606
20 | New Mexico | 606
21 | Texas | 587
22 | Missouri | 585
23 | Massachusetts | 585
24 | Illinois | 573
25 | Pennsylvania | 554
26 | Rhode Island | 550
27 | Hawaii | 535
28 | Kentucky | 535
29 | South Carolina | 533
30 | Georgia | 530
31 | Michigan | 524
32 | Ohio | 523
33 | New Hampshire | 518
34 | Vermont | 514
35 | Montana | 505
36 | Idaho | 494
37 | Alabama | 492
38 | Tennessee | 492
39 | South Dakota | 491
40 | Minnesota | 477
41 | Oklahoma | 470
42 | North Carolina | 468
43 | Nebraska | 463
44 | West Virginia | 458
45 | Maine | 457
46 | Arkansas | 455
47 | Louisiana | 449
48 | Kansas | 429
49 | North Dakota | 395
50 | Mississippi | 345


Top 10 Vulnerabilities in SAP

By Christoph Nagy, SecurityBridge

As we know, SAP (Systems, Applications, and Products in Data Processing) is a widely used enterprise resource planning (ERP) software suite that helps organizations manage various business operations. No digital system is secure by nature or by default – there will always be security challenges, and SAP is no exception.

In this article, we discuss the Top 10 vulnerabilities in SAP – how they affect the security of an SAP system, and finally, how to identify and manage them.

  1. Incomplete Patch Management:

Patching is one of the most significant tasks and security concerns in SAP. Patches, or “SAP Security Notes” (generally released on the second Tuesday of each month), often contain critical security fixes that address vulnerabilities. Failing to apply these patches promptly leaves systems open to known exploits, which cybercriminals actively target.

  2. Default Credentials:

One of the most prevalent SAP security issues is the use of default or weak passwords. SAP systems often come with default usernames and passwords that are well-known. If organizations do not change these defaults or enforce strong password policies, it becomes relatively easy for attackers to gain access or escalate privileges.
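As an illustration, an audit could start by flagging the well-known accounts that ship with SAP systems. A hypothetical sketch (the user list here is invented; a real audit would query the system's actual user store):

```python
# Well-known default SAP accounts that ship with the system; if these
# remain active with unchanged passwords, they are an easy target.
KNOWN_DEFAULT_USERS = {"SAP*", "DDIC", "EARLYWATCH", "TMSADM", "SAPCPIC"}

def flag_default_accounts(active_users):
    """Return the subset of active accounts matching known SAP defaults."""
    return sorted(set(active_users) & KNOWN_DEFAULT_USERS)

# Hypothetical user list pulled from a system export.
print(flag_default_accounts(["DDIC", "JSMITH", "SAP*", "MWEBER"]))  # ['DDIC', 'SAP*']
```

Any flagged account would then be checked for a default password and either locked or rotated.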

  3. Inadequate User Authorization Controls:

Role-based access control (RBAC) is crucial in SAP systems, but many organizations struggle with proper role and authorization management, and poorly managed user access is a common issue. Organizations must implement robust RBAC so that users have only the permissions necessary for their roles; failing to do so can lead to data breaches and unauthorized activities. Overly permissive roles or insufficient segregation of duties (SoD) invite unauthorized access and fraud, while overly restrictive roles hinder productivity.

  4. Unsecured Interfaces:

SAP systems often expose multiple communication interfaces, including RFC (Remote Function Call) and HTTP. Attackers can exploit inadequately secured interfaces to access and manipulate SAP data, or to move between SAP systems and compromise the entire landscape. Interfaces can be secured in several ways, for example by replacing passwords with configured trust relationships between systems, or by using SAP's UCON functionality to drastically reduce the attack surface. Enabling data encryption is another essential measure for protecting sensitive information both at rest and in transit; without it, data is exposed to eavesdropping and theft.

  5. Inadequate Authentication:

Weak authentication mechanisms, such as simple passwords and insufficient authorization checks, can result in unauthorized access and privilege escalation. Organizations should implement multi-factor authentication (MFA) and regularly review and update authentication policies. In SAP specifically, enforcing single sign-on greatly reduces both the attack surface and the password-reset workload for support teams.

  6. Insecure Custom Code:

Again, no digital system is perfect, and custom-developed code within SAP environments can introduce security vulnerabilities. Organizations must enforce regular code reviews and security testing to identify and remediate issues in custom code.

  7. Poorly Managed Security Logs:

Many organizations still do not activate SAP Security Audit Log in their systems, which leaves a huge gap in terms of incident investigation. Proper logging and monitoring are essential for detecting and responding to security incidents. Inadequate or misconfigured logging can make it challenging to identify suspicious activities or breaches. Organizations need to establish robust monitoring and alerting systems to stay vigilant against potential threats.
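Once the audit log is active, even simple tooling over its events pays off. A hypothetical alerting sketch (the event format is invented for illustration; a real deployment would consume the actual SAP Security Audit Log):

```python
from collections import Counter

# Minimal alerting sketch: count failed-logon events per user and flag
# anything at or above a threshold for investigation.
def failed_logon_alerts(events, threshold=3):
    failures = Counter(e["user"] for e in events if e["type"] == "LOGON_FAILED")
    return {user: n for user, n in failures.items() if n >= threshold}

# Hypothetical event stream.
events = [
    {"user": "JSMITH", "type": "LOGON_FAILED"},
    {"user": "JSMITH", "type": "LOGON_FAILED"},
    {"user": "JSMITH", "type": "LOGON_FAILED"},
    {"user": "MWEBER", "type": "LOGON_OK"},
]
print(failed_logon_alerts(events))  # {'JSMITH': 3}
```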

  8. Configuration Errors and Insecure Default Settings:

Misconfigured SAP systems can expose sensitive data and functionality to unauthorized users. This includes incorrect or overly permissive settings and parameters for database and application servers, network configurations, SAP components such as the Message Server, RFC Gateway, and ICM, and user authorizations. Configuration errors are often the result of human oversight or lack of expertise, so the four-eyes principle should be applied wherever possible when performing configuration changes.
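One practical safeguard is to diff live settings against a documented secure baseline. A hypothetical sketch; the profile parameter names and recommended values below are illustrative and should be verified against SAP's own hardening guidance:

```python
# Illustrative secure baseline of profile parameters (assumed values).
SECURE_BASELINE = {
    "login/min_password_lng": "12",
    "login/no_automatic_user_sapstar": "1",
    "gw/acl_mode": "1",
}

def config_deviations(current):
    """Map each out-of-baseline parameter to (current value, expected value)."""
    return {k: (current.get(k), v) for k, v in SECURE_BASELINE.items()
            if current.get(k) != v}

# Hypothetical export of current settings; only the password length deviates.
current = {
    "login/min_password_lng": "6",
    "login/no_automatic_user_sapstar": "1",
    "gw/acl_mode": "1",
}
print(config_deviations(current))  # {'login/min_password_lng': ('6', '12')}
```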

  9. Lack of Security Awareness:

Employees and users can inadvertently introduce security risks through actions like social engineering or falling victim to phishing attacks; regular security training and awareness programs are essential to mitigate this risk.

  10. Obsolete and Unsupported Systems:

Running outdated or unsupported SAP systems, operating systems, and databases can be a significant security risk, as these infrastructures are more likely to have known vulnerabilities that attackers can exploit. If an SAP system is decommissioned, proper steps must be taken to ensure that all users are locked out and the data is deleted to prevent unwanted data usage; even decommissioned systems may still contain sensitive business data.

In conclusion, SAP security and proper configuration management are critical concerns for organizations due to the sensitive nature of the data managed within SAP systems and how business-critical they are. To mitigate these top 10 security issues, organizations should establish a comprehensive SAP security strategy that includes regular patch management, robust access controls, secure custom code development, and ongoing user training. Organizations should stay informed about the latest SAP security vulnerabilities and best practices to adapt their security measures accordingly. Addressing these security challenges is essential to safeguard the CIA triad (Confidentiality, Integrity, and Availability) of SAP systems and the information they contain.

Christoph Nagy

Christoph Nagy has 20 years of working experience within the SAP industry. He has utilized this knowledge as a founding member and CEO of SecurityBridge, a global SAP security provider serving many of the world’s leading brands and now operating in the U.S. Through his efforts, the SecurityBridge Platform for SAP has become renowned as a strategic security solution for automated analysis of SAP security settings and real-time detection of cyber-attacks. Prior to SecurityBridge, Nagy applied his skills as an SAP technology consultant at Adidas and Audi.

Account compromised? Don’t panic—take these steps instead

By James Allman-Talbot, Head of Incident Response & Threat Intelligence, Quorum Cyber

 

There are few things scarier than having your account compromised. It doesn’t matter if it’s a corporate account or a personal one that’s fallen into the hands of a bad actor. The initial wave of confusion—Hey, why isn’t it letting me in, or I don’t remember making that change—quickly turns to dread as you realize what has actually happened: someone has gained access to your account and all the information in it, and has the power to act on your behalf, likely to a damaging degree.

Before that dread can turn into panic, take a breath. There are in fact things you can, and should, do in the event of a valid account compromise, and once you’ve taken a moment to collect yourself you should jump right on them. Panicking is bad, but you still don’t want to delay.

  • If you can still access the account, change your password—immediately. Don’t reuse a password utilized on other accounts, and don’t change it to some variation of the old one (adding an exclamation point to the end of the old password is probably the first thing the hacker would guess if they try to get in again).
  • If the account is one where you can see and edit active sessions: close all of them. Obviously, if you see a session that is active on your account from halfway across the world, that’s probably where the person is who is in your account, but geographical data can sometimes be spoofed so it’s best to shut down all sessions to be safe.
  • You also want to contact people who can help you lock down the account and undo any damage. If it’s a corporate account that was hacked, reach out to your IT and/or security department—if you have a data protection officer, they’re the best contact—and let them know what happened. They’ll direct you on the next steps and help you determine what data was accessed and actions taken by the attacker.
  • Alternatively, if it’s your own personal account, contacting customer support for the application, site, or service should be your next step. They should have the tools to help you ensure your account is secured and undo any actions that the account took that you did not authorize.
  • Two-factor authentication (2FA), where you have to enter a code sent to your email or phone via text, is your friend. If 2FA wasn’t enabled on the account before, do it now. It makes it more difficult for someone to gain access to your account even if they’ve managed to discover your password. Yes, we all feel that mild ping of annoyance when we have to toggle over to another app to get the code, but I promise you that dealing with a hacked account is far, far more irritating (and lasts a lot longer).
  • Similar to closing out active sessions on an account, check for suspicious activity that might point to how the account was compromised or what the person who broke in got up to. Unauthorized purchases, odd activity, or specific data accessed—figuring out what damage they did will help you undo as much of it as possible.
  • Use that same password for other accounts? Change your repeated passwords elsewhere, starting with the email address tied to that account; oftentimes, hackers don’t stop at one account, and the email address (which is usually the most reliable backup for regaining access after an account locks down) is usually their next stop. For any accounts you have to change this way, it’s a good idea to do all of the above steps as well to see if they already accessed those accounts without you realizing.
  • It’s best to use a fully unique password for every account (again, especially your email). We all have countless accounts that are secured by passwords, so use a password manager to help keep track of those passwords and generate strong, unique ones you don’t have to worry about forgetting.
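For a sense of how those one-time codes work under the hood, here is a minimal TOTP sketch following RFC 6238, the standard most authenticator apps implement. This is illustrative only; use a vetted library for anything real:

```python
import base64
import hmac
import struct
import time

# Minimal TOTP (RFC 6238): derive a short-lived 6-digit code from a
# shared secret and the current 30-second time window.
def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test-vector secret ("12345678901234567890" in base32) at T=59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # "287082"
```

Because the code changes every 30 seconds and depends on a secret the attacker does not have, a stolen password alone is no longer enough.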

Accounts are compromised all the time, and while it’s nearly impossible to guarantee it’ll never happen to you, the above steps can limit the damage that is done when you’re hacked and help prevent it from happening again. Remember, if you notice weird activity on your account or start receiving authentication requests from 2FA-enabled accounts that you didn’t generate, that’s a sign that something is amiss and action should be taken quickly.

 

James Allman-Talbot is the Head of Incident Response and Threat Intelligence at Quorum Cyber. James has over 14 years of experience working in cybersecurity, and has worked in a variety of industries including aerospace and defense, law enforcement, and professional services. Over the years he has built and developed incident response and threat intelligence capabilities for government bodies and multinational organizations, and has worked closely with board level executives during incidents to advise on recovery and cyber risk management.

 

UK’s NCSC Publishes “The Guidelines for Secure AI System Development”

The UK’s National Cyber Security Centre (NCSC) just published The Guidelines for Secure AI System Development, which sets out some basic principles for protecting artificial intelligence (AI) and machine learning (ML) systems from cyberthreats. The material is thought-provoking and relevant, but one of the most impressive things about it is the sheer number of entities that contributed to it and endorsed it.

No fewer than 21 other international agencies and ministries contributed, including the NSA, CISA, the FBI, and cyber agencies from Israel, France, Poland, Japan, Italy, Germany, and many others. Corporations ranging from Google to IBM, Amazon, and Microsoft contributed as well, as did the RAND Corporation and Stanford University, among others. When so many organizations come together to share suggested best practices for security, it makes sense to listen.

NCSC CEO Lindy Cameron said, “These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

The Guidelines are intended to help developers make informed decisions about cybersecurity as they produce new AI systems. Some of what they have to say is pretty basic, but still important, e.g., make sure that the infrastructure hosting AI systems is secure. The document does, however, contain useful insights about risks that are less commonly known.

For example, the document discusses novel security vulnerabilities affecting AI systems. These include “data poisoning,” which involves injecting bad data into an AI or ML model to cause it to generate unintended outputs. This could be a serious issue when considered in light of existing controversies around AI, such as its use in criminal sentencing. Sentencing algorithms are already under attack for racial “machine bias.” That problem stems from the use of real, if biased, data. How much worse could it be if someone maliciously introduced false, highly biased data into the system? That is the kind of problem these guidelines seek to address.
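As a toy illustration of data poisoning (a hypothetical sketch, not drawn from the guidelines themselves), even a simple nearest-neighbor classifier can be flipped by a single mislabeled training point injected by an attacker:

```python
# A 1-nearest-neighbor classifier over (value, label) training pairs.
def nearest_neighbor(train, x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

clean = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
# Attacker injects one mislabeled point near the input they want to flip.
poisoned = clean + [(3.1, "high")]

print(nearest_neighbor(clean, 3.0))     # 'low'  -- correct on clean data
print(nearest_neighbor(poisoned, 3.0))  # 'high' -- flipped by the poisoned point
```

Real attacks on large models are subtler, but the mechanism is the same: corrupt the training data and the outputs follow.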

The guidelines comprise four areas of practice:

  • Secure design—which suggests that system designers and developers understand the cyber risks they face and model threats as they embed security into the design.
  • Secure development—which relates to developers understanding the security of their software supply chains, along with documentation, and the management of assets and technical debt.
  • Secure deployment—which involves protecting infrastructure and models from compromise, threat or loss, and developing responsible release processes.
  • Secure operation and maintenance—which includes standard security operations processes like logging, monitoring, incident response, and threat sharing.

The recommendations are all sound. The question, of course, is whether anyone will follow them. That remains to be seen, but if AI security is like any other branch of security, follow-through will be inconsistent. As is the case everywhere, resources are not unlimited. And, as the document points out, the pace of development may push security into a secondary position. That would be a mistake, especially in use cases where serious consequences can flow from tainted AI, e.g., war-fighting scenarios, medical decision-making, criminal justice, and so forth.

Industry experts are weighing in on the Guidelines, with Anurag Gurtu, Chief Product Officer of StrikeReady, remarking, “The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence.”

Troy Batterberry, CEO and founder of EchoMark, advised caution. He said, “While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”

We’ll have to see how well the AI and software industries deal with AI security. These Guidelines represent an important early step in the right direction, however. They are general in nature, offering a fair amount of well-trodden best practices that people don’t always follow. Yet, even if adherence is uneven, it’s essential that we be having these dialogues now. If AI is the future, then AI risk is part of that future. We need to be dealing with it now, and these Guidelines show a path forward.