A Risk-Oriented Hierarchy of Intervention in the Deployment and Customization of Large Language Models

A pragmatic discussion of the levels of risk and complexity involved in customizing large language models. Many organizations are using LLM technology to build customized chatbots, RAG tools, and content generators, yet few have a full understanding of the options available, or of the risk and development complexity that come with each level of LLM customization and deployment.

In the contemporary landscape of artificial intelligence deployment, a structural shift is occurring: base models are becoming increasingly capable out of the box. Instruction-following performance, contextual reasoning, retrieval integration, and domain adaptability have improved to such a degree that many historical justifications for invasive model modification are steadily eroding. This evolution necessitates a corresponding philosophical and governance framework—one grounded in the principle that greater customization introduces greater uncertainty, greater liability, and a proportionally greater need for validation and risk controls.

At its core, the responsible deployment of large language models should be guided by a hierarchy of invasiveness. Each successive layer of intervention introduces deeper system coupling, increased behavioral unpredictability, and escalating regulatory, operational, and reputational risk. Accordingly, risk management should not begin at the level of model alteration, but rather at the least invasive layers of interaction and configuration.
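The idea of a hierarchy of invasiveness can be made concrete in code. The sketch below is purely illustrative: the specific level names (prompting, retrieval augmentation, adapter tuning, full fine-tuning) and the control set returned for each are my own assumptions, not a prescribed framework from this post. The key property it models is that controls accumulate as interventions become more invasive.

```python
from enum import IntEnum

class Intervention(IntEnum):
    """Illustrative hierarchy: higher values = more invasive, more risk."""
    PROMPTING = 1          # system/user prompt design; model untouched
    RETRIEVAL = 2          # RAG: external knowledge added; model untouched
    ADAPTER_TUNING = 3     # parameter-efficient fine-tuning (e.g. LoRA)
    FULL_FINE_TUNING = 4   # weights rewritten; behavior least predictable

def required_controls(level: Intervention) -> list[str]:
    """Map an intervention level to a hypothetical minimum control set.
    Controls accumulate: each deeper level inherits everything above it."""
    controls = ["output monitoring"]
    if level >= Intervention.RETRIEVAL:
        controls.append("source/provenance auditing")
    if level >= Intervention.ADAPTER_TUNING:
        controls.append("pre-deployment behavioral evaluation")
    if level >= Intervention.FULL_FINE_TUNING:
        controls.append("regression testing against the base model")
    return controls
```

Ordering the levels as an `IntEnum` makes the governance rule mechanical: a deeper intervention can never carry a smaller validation burden than a shallower one.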


The Narrative About AI Triggered Job Loss is Speculative and Irresponsible

We are seeing an increasingly loud public narrative about potential job losses from AI deployment. These claims receive a great deal of media attention and are rewarded in the social media landscape for being as pessimistic as possible. Mass job loss remains highly speculative, and many claims skew toward the outright implausible. Yet the narrative itself is causing real harm at scale.

The increasingly popular narrative of inevitable, catastrophic, long-term job loss due to artificial intelligence is not grounded in robust empirical evidence. It is overwhelmingly speculative, framed in worst-case abstractions, and presented to the public with a level of certainty that far exceeds what the data justifies. That alone would be intellectually questionable. But the deeper issue is ethical: the psychological and social harm caused by repeatedly presenting extreme scenarios as near-certainties.

There is a very real human cost to this discourse. People are not reading these forecasts as academic hypotheticals. They are internalizing them as personal futures. Students reconsider career paths. Mid-career professionals experience anxiety and loss of motivation. Workers in already uncertain labor markets feel prematurely obsolete. This is not a trivial side effect. It is a measurable psychological burden placed on millions of people based on projections that remain deeply uncertain and, in many cases, methodologically weak.

Serious economic forecasting requires discipline, historical grounding, and humility about technological diffusion. What we are instead seeing in many public conversations is a pattern of extrapolation from capability demos directly to labor market collapse, skipping entirely over the realities of workflow integration, governance constraints, liability frameworks, organizational inertia, and economic adaptation. That is not analysis. That is narrative acceleration.


An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 1

Due to the length of this detailed topic, it will be broken into multiple parts. One reason this post is so long is the extreme entrenchment of incorrect views, which creates a need to explain in detail why they are wrong.

As written about earlier, Warren Buffett is one of the worst out there when it comes to spreading misinformation and unnecessary alarm about cyber security risks. He's not the only one, however. There seems to be an incessant and rather insane cry of "Well, there are third-party risks and they could be systemic. Let's throw our hands up in the air and say there is nothing we can do."

Of course, this is not the case. In the finite and artificial world of cyber security, no risk is insurmountable and all can be understood. Third-party risks arise because so many organizations depend on various third parties, such as vendors and contractors. Even clients and customers can be a third-party risk, because some organizations rely on a relatively limited number of clients.

In this video-accompanied post, I will do my best to provide detailed information to refute this dangerous and deeply entrenched idea.

Let's be clear about something: this is not new or unique to cyber.
There is nothing new or novel about this concept. Some policyholders have always been dependent on a limited number of vendors or service providers. Even in the years before cyber security, a major failure of the power grid, as happened in 1977 and 2003, could cause widespread loss across a large area. A single storm can impact a huge area, and a bad hurricane season can bring devastating storms to a wide region. That is what a systemic risk is.

However, in cyber security, systemic risks can be detected ahead of time, if we care to look. They are artificial, arising from the relationships we choose to have and the engineered, man-made systems we use. Because these risks are anthropogenic, they are finite and understandable. It is always possible to know your risks when they live in engineered systems that people built and own.
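The claim that third-party systemic risk is detectable can be illustrated with a simple concentration analysis. The sketch below is a hypothetical example, not an underwriting tool: the portfolio structure, vendor names, and the 50% threshold are all my own assumptions. It shows the basic mechanic of flagging a vendor that so many policyholders share that a single outage or breach there would hit much of the portfolio at once.

```python
from collections import Counter

def vendor_concentration(portfolio: dict[str, set[str]]) -> Counter:
    """Count how many policyholders depend on each third-party vendor.

    `portfolio` maps a policyholder name to the set of vendors it
    relies on. All names used here are hypothetical."""
    counts = Counter()
    for vendors in portfolio.values():
        counts.update(vendors)
    return counts

def systemic_vendors(portfolio: dict[str, set[str]],
                     threshold: float = 0.5) -> list[str]:
    """Flag vendors relied on by more than `threshold` of the portfolio,
    i.e. single points of failure shared across many insureds."""
    counts = vendor_concentration(portfolio)
    n = len(portfolio)
    return sorted(v for v, c in counts.items() if c / n > threshold)
```

For example, with a toy portfolio where three insureds all depend on the same cloud provider, that provider is flagged while single-client vendors are not:

```python
portfolio = {
    "AcmeCo":   {"CloudX", "PayrollY"},
    "BetaInc":  {"CloudX"},
    "GammaLLC": {"CloudX", "MailZ"},
}
systemic_vendors(portfolio)  # → ["CloudX"]
```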


Cyber Insurance Applications Revealed

The moral failing of insurance that regularly pays ransoms, makes no attempt to avoid doing so, affirmatively disengages leadership, and funds terrorism should be obvious. Yet many argue with me, insisting that insurers are doing the best they can, have incomplete data, or are improving.

Unfortunately, they’re not. There have been a few small measures taken, mostly just in terms of wording changes. Not a dime has been invested in enforcement or compliance management.

To show how negligent these insurance companies have been, it's important to look at the applications they use for cyber insurance. These applications represent the entirety of these companies' policy controls. It's abundantly clear that no one with any idea of how any of this works wrote them. There is no other enforcement. Even large clients do not receive independent assessments or audits. These "requirements" are not generally enforceable, do not create a call to action, and just plain won't ever work. Money will continue to be lost until even the most minimal efforts to do otherwise are made.

Cyber insurance is considered a loss center (for some reason), so it gets zero investment, and the underwriters who end up on this line are typically the lowest achievers. That is truly the opposite of what is needed here.

These applications appear to be current, although some have not been updated in years. I do not think it is at all unreasonable to say that those responsible for writing the loss controls for insurance that paid extortion to hostile foreign parties should face some kind of criminal charges. This is not normal. This is not okay. It should not be normalized to rely on such clueless people when professionals are available.

Check out this PDF to get an idea of just how bad this situation is.

BREAKDOWN OF CYBER INSURANCE APPLICATIONS

HSB Total Cyber Insurance Application
AIG's Cyber Underwriting Application
Travelers CyberRisk Applications and Forms
Chubb Cyber And Privacy Insurance
Beazley Cyber Application
The Hartford CyberChoice Premier Application
FailSafe Cyber / Information Risk Supplement Application