Should Chatbots Refuse to Give High-Risk Advice?

Chatbots are becoming increasingly popular. ChatGPT, for example, has nearly a billion weekly users. These LLM-based services are used for all kinds of things, including many their initial developers never dreamed of: planning, brainstorming, writing, translation, companionship, functional play, humor, studying, reformatting text, and writing code. People also ask chatbots for all kinds of advice and facts. Chatbots have become the go-to answer engines for questions ranging from “What is the capital of Chad?” to “How long should I boil a lobster?”

However, LLMs have a problem: they “hallucinate.” The term hallucination is a bit of a misnomer, because what is actually happening has less to do with figments of the imagination and more to do with patterns and probabilities. The LLM confabulates a response that fits the patterns of a valid response but is not factually accurate. This often happens because the model lacks information on a topic, but it can happen even when the knowledge is present in its training data.


Hallucinations are impossible to eliminate completely from large language models. They are as much a feature as a bug, because the ability to produce false information is inseparable from the model’s ability to generate fiction and hypotheticals or engage in role playing. It is the nature of LLMs as stochastic probability engines. The only real way to eliminate hallucinations is an output pipeline that checks and verifies responses before they reach the user, and that is not something chatbots currently do.
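To make the idea of an output-verification pipeline concrete, here is a minimal sketch. Everything in it is a hypothetical illustration, not any real chatbot’s architecture: the point is simply that a generated claim is never asserted until it is checked against a trusted source, and anything unverifiable is flagged rather than stated as fact.

```python
# Hypothetical post-hoc verification step: check generated claims against
# a trusted store before showing them. TRUSTED_FACTS stands in for a real
# knowledge base; the names here are illustrative assumptions.

TRUSTED_FACTS = {
    "capital of chad": "N'Djamena",
}

def verify_claims(claims: dict[str, str]) -> dict[str, str]:
    """Label each generated claim 'verified', 'contradicted', or 'unverifiable'."""
    results = {}
    for topic, claimed_value in claims.items():
        known = TRUSTED_FACTS.get(topic.lower())
        if known is None:
            results[topic] = "unverifiable"   # cannot be checked: flag, don't assert
        elif known == claimed_value:
            results[topic] = "verified"
        else:
            results[topic] = "contradicted"   # pattern-matched a valid answer, but wrong
    return results

# A confabulated answer fits the *shape* of a valid one but fails the check:
print(verify_claims({"capital of chad": "N'Djamena"}))  # {'capital of chad': 'verified'}
print(verify_claims({"capital of chad": "Lagos"}))      # {'capital of chad': 'contradicted'}
```

The hard part in practice is not the comparison but building a trusted store broad enough to cover open-ended questions, which is exactly why chatbots do not do this today.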

This is well understood and documented, but hallucinations continue to slip past people and be believed. A number of high-profile incidents have involved false citations in scientific journals, fake case law presented in court, and medical advice for diseases that do not even exist. Part of the problem is that people tend to believe the results a computer gives them, because in the past computers have been reliable and deterministic.


An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 3

Due to the length of this detailed topic, it will be broken into multiple parts. Previous portions here:

An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 1
An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 2

Technical Approvals in Cybersecurity: A Missing Pillar of Risk Management

In traditional industries, technical approval processes are a vital part of ensuring safety and reliability. For example, companies often pay to have their devices tested and approved by organizations like UL, which rigorously test products to ensure safety and reliability. Safety-critical devices—such as fire alarms, fire pumps, and safety doors—require approval before being used by insured parties, giving insurers the confidence that these devices will perform when needed.

Cybersecurity, however, lacks a similar robust system of technical approvals. Without an established process, standards in cybersecurity are often vague and difficult to enforce. For instance, many standards simply state that an organization must have a “firewall” or use “industry-standard encryption.” These requirements are difficult to enforce because they are vague—what exactly qualifies as an acceptable firewall, and who verifies it? There are many products that could meet these requirements on paper, but without an approval process, there is no consistent or provable standard of quality.
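To illustrate the difference between a vague requirement and an enforceable one, here is a small sketch. The thresholds below are illustrative assumptions, not an actual certification standard: the point is that “industry-standard encryption” becomes enforceable only when it is translated into explicit, mechanically checkable floors.

```python
# Sketch: a vague standard ("use industry-standard encryption") versus a
# precise, checkable one. MIN_TLS and MIN_RSA_BITS are illustrative
# assumptions, not values from any real approval body.
import ssl

MIN_TLS = ssl.TLSVersion.TLSv1_2   # explicit protocol floor
MIN_RSA_BITS = 2048                # explicit key-size floor

def meets_standard(tls_version: ssl.TLSVersion, rsa_key_bits: int) -> bool:
    """A requirement is enforceable only when pass/fail is mechanical."""
    return tls_version >= MIN_TLS and rsa_key_bits >= MIN_RSA_BITS

print(meets_standard(ssl.TLSVersion.TLSv1_3, 4096))  # True
print(meets_standard(ssl.TLSVersion.TLSv1, 2048))    # False: protocol below the floor
```

An approval process is, in effect, an independent body running checks like this (and far deeper ones) so that “acceptable firewall” has a provable meaning.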

Technical approvals are ultimately a necessary step toward establishing universally high standards, and they are what will eventually end the problem of pervasive third-party risk. They are an unavoidable part of standardizing risk management in technology and reining in losses. Unfortunately, it will be difficult to make real progress in cybersecurity until a robust system of independent testing and approval is established. That system is what will create the “ecosystem of trust” necessary to enforce security.

A Well-Established and Necessary Process
It is unusual that cybersecurity proceeds without technical approval, but this reflects an outdated mindset in IT, in which buyers assume all risk without warranties or guarantees. Technical approval is a well-established process in many industries, providing independent verification that a product meets specific standards and ensuring accountability. It is no longer the 1980s, and software and IT products are no longer specialty or experimental goods, yet this mentality persists.


Why is Ransom Paid? Panic, Perverse Incentives and Bluffs. 

It is rarely in the best interest of the victim to pay ransom. The narrative is often “because they have no choice” or “it is to protect people from the leak,” but this is a complete myth, and it tends to be advanced by those who have paid ransom before as a way of excusing their terrible and avoidable behavior. Nobody owns this untrue narrative more than the insurance underwriters who normalized it.

The problem with ransomware is that most companies are willing to pay, and as long as this remains true it will be a persistent problem that only gets worse. Ransomware has become so entrenched, and is so easy and cheap to pull off, that it will not subside until it becomes substantially more difficult to succeed in a ransomware attack and make money doing so. Unfortunately, there have been no serious efforts to reduce ransom payments.

It is important to never forget exactly what is being paid for with the money American companies hand over.
(Source)

When ransomware gangs lock down a system, they are frequently the first people the victims hear from, and they will do their best to instill fear, create panic, and make the situation seem much worse than it is. They will often claim that they will soon delete the data or raise their price for restoration. Paying for data restoration is never necessary if even the most basic precautions have been taken to back up data, but that is often not the case: 80% of organizations facing ransomware do not have adequate backups. The situation is common, though always avoidable, and at least half of ransom payments are motivated primarily by the need to release systems and have data returned, not to prevent it leaking.
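The “most basic of precautions” mentioned above can be made concrete. Here is a minimal sketch of what an adequate backup check looks like; the 24-hour threshold and the data are illustrative assumptions, not a standard. The point is that a backup only removes the need to pay for “restoration” if it is both recent and verifiably intact before you ever need it.

```python
# Hypothetical backup-adequacy check: a backup counts only if it is fresh
# AND its contents actually match the data it claims to protect.
# MAX_BACKUP_AGE is an illustrative threshold, not a standard.
import hashlib
from datetime import datetime, timedelta

MAX_BACKUP_AGE = timedelta(hours=24)

def backup_is_adequate(original: bytes, backup: bytes,
                       backup_time: datetime, now: datetime) -> bool:
    """Fresh enough to limit data loss, and bit-for-bit intact."""
    fresh = (now - backup_time) <= MAX_BACKUP_AGE
    intact = hashlib.sha256(original).digest() == hashlib.sha256(backup).digest()
    return fresh and intact

now = datetime(2024, 1, 2, 12, 0)
data = b"customer records"
print(backup_is_adequate(data, data, datetime(2024, 1, 2, 0, 0), now))          # True
print(backup_is_adequate(data, b"corrupted", datetime(2024, 1, 2, 0, 0), now))  # False
```

Organizations in the 80% without adequate backups are typically failing one of these two checks: backups exist but are stale, or they exist but were never verified as restorable.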

In many cases, companies have felt it was more reliable or faster to pay ransom, and with gangs so skilled at instilling fear and manipulating American companies, that is not uncommon. In some cases insurers have even insisted that victims pay ransom against their will. HSB is one of the few that still does this openly, forcing victims to pay even when they felt it was unnecessary, simply because the insurer judged it cheaper or safer. The practice has never completely disappeared from other insurers, either: because claims staff frequently receive kickbacks, they will tell organizations they are best off paying, even when they are not.

Unfortunately, it is not cheaper or safer to pay, and this is especially true if you do have backed-up data. The restored data is virtually assured to be contaminated with malware and backdoors, and the incident response will be far worse off. Paying ransom nearly doubles the average cost of cleaning up an incident, and it dramatically increases the chances of future attacks.


An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 2

Understanding and Preventing Zero-Day and Other Software Supply Chain Attacks

This is the second post in the series, intended to help readers better understand how third-party risks can be managed and to address the problem of misinformation from high-ranking sources. The myth that third-party risks are unmanageable is pervasive, driven primarily by insurance executives insisting, in effect, “Well, I don’t understand it, and therefore it can’t be done.” Because of this toxic insistence, it is necessary to break things down and provide detailed supporting information.

In this post, we will look at zero days and unpatched vulnerabilities as a type of exposure to third-party risk. Zero days are similar to supply chain attacks, and many of the same methods for controlling zero days apply to supply chain attacks as well. The MOVEit breach is an example of a zero-day attack, one that caused massive damage to the US and global economy, and it illustrates exactly how these attacks work.
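One control that applies to both zero days and supply chain exposure is a software inventory that can be checked against advisories the moment a vulnerability is disclosed. Here is a minimal sketch of that idea; the product name and version numbers are hypothetical illustrations, not the actual MOVEit advisory data.

```python
# Sketch of an inventory check: flag any installed software whose version
# is at or below a known-vulnerable release. "example-transfer" and its
# versions are hypothetical stand-ins, not real advisory data.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '2023.0.1' into (2023, 0, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

KNOWN_VULNERABLE = {
    "example-transfer": "2023.0.1",   # last vulnerable release (hypothetical)
}

def is_exposed(product: str, installed: str) -> bool:
    """True if the installed version has not moved past the vulnerable one."""
    vuln = KNOWN_VULNERABLE.get(product)
    if vuln is None:
        return False
    return parse_version(installed) <= parse_version(vuln)

print(is_exposed("example-transfer", "2023.0.1"))  # True: still exposed
print(is_exposed("example-transfer", "2023.0.2"))  # False: patched
```

A zero day, by definition, has no patch at disclosure, but the same inventory tells you immediately which systems run the affected product and must be isolated or mitigated until one ships.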

In some ways, it was the kind of systemic attack that insurers are constantly complaining about. However, it also illustrates all the ways the damage could have been prevented. MOVEit was bad, but it was also tragic, because so much of the loss could have been prevented if we had our act together.


An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 1

Due to the length of this detailed topic, it will be broken into multiple parts. One of the reasons this post is so long is the extreme entrenchment of incorrect views, and therefore, a need to provide detailed explanations of why they are wrong.

As written about earlier, Warren Buffett is one of the worst out there when it comes to spreading misinformation and unnecessary alarm about cybersecurity risks. He is not the only one, however. There seems to be an incessant and rather insane cry of “Well, there are third-party risks and they could be systemic. Let’s throw our hands up in the air and say there is nothing we can do.”

Of course, this is not the case. In the finite and artificial world of cybersecurity, no risk is insurmountable and all can be understood. Third-party risks arise because so many organizations depend on various third parties, such as vendors and contractors. Even clients and customers can be a third-party risk, because some organizations rely on a relatively limited number of clients.

In this video-accompanied post, I will do my best to provide detailed information to refute this dangerous and deeply entrenched idea.

Let’s be clear on something: this is not new or unique to cyber.
There is nothing new or novel about this concept. Some policyholders have always been dependent on a limited number of vendors or service providers. Even in the years before cybersecurity, a major failure of the power grid, as happened in 2003 and 1977, could cause widespread loss across a large area. A single storm can impact a huge area, and a bad hurricane season can bring devastating storms to a wide region. That is what a systemic risk is.

However, in cybersecurity, all systemic risks can be detected ahead of time, if we care to look. They are artificial, arising from the relationships we choose to have and the engineered, human-made systems we use, and they are therefore finite and possible to understand. It is always easier to know your risks when they live in engineered systems you own.
