Since so many people have asked me for comments on this:

https://www.barrons.com/news/google-says-ai-weather-model-masters-15-day-forecast-cdc5793d

I felt I should add some context to what is being pushed here.

First of all, I believe artificial intelligence will improve weather forecasting. The current models are probably good enough for planning out to a week about 90% of the time. But that means in a year there may be 30-plus days, roughly 10% of the time, when the forecast is wrong. And I mean flat-out wrong. Someone investing money can make a lot of it by going with the forecast the 90% of the time it is right, but picking out the 10% of the time it is going to be wrong. My company just did exactly that: forecasting from September, based on the meticulous hurricane analogs we use, that there would be major cold in the early part of the winter, as we have just seen. Yet climate models 3 weeks away still weren't seeing it. And neither was the vaunted AI. It goes out to 15 days, but until about 8 to 10 days before, it did not see what we had been telling our clients about since September. It also did not catch on to the hurricanes until they were obvious, well after we put out this tweet on September 8, more than 15 days ahead for the AI the author is claiming has achieved mastery.

This was the Euro ensemble. It did not have a hurricane on it, but it had the sea level pressure pattern, precipitation pattern, and vertical velocity pattern for a 15-day period that made knowing hurricanes would show up like shooting fish in a barrel. I used the same technique for Ian. The storms were not on the map, and the AI had no individual storm, but the pattern matched up against previous patterns that said they were coming.

But how could the AI model see it when it only goes out to 15 days in the first place? And when it finally did, having a hurricane in one place on one run and in another place on the next run is not accuracy. You can't just make 15 forecasts and choose which one was right. Most of the people claiming mastery are probably not everyday global forecasters who have to look everywhere, and who therefore know when and where modeling is wrong.

My point is that there is a lot of propaganda coming out about AI. And like a lot of the propaganda we see in the media today, there are elements of truth used to make you believe the extension they are trying to point toward. The recent Barron's article is classic. It does not show any of the verification scores. It claims mastery out to day 15. Any meteorologist watching the AI models (there are several of them, and they come out four times a day) knows they are all over the place the same way the current models are. A classic example was last year, when an AI had Hurricane Lee moving up Narragansett Bay a week away. Well, Hurricane Lee wound up going 400 miles east of Providence. But what they will use is what we call selective verification: they will cherry-pick the right from the wrong, not telling you about the wrong. Think about it. If there are four forecasts a day and you are looking at day 10, that means there are 40 tries at getting it right to choose from by verification time. If it's day 15, it's 60 runs. If there are 5 AI models, that's 300 runs. Certainly one of them has to get it right once. Meanwhile, a meteorologist who has to predict the weather on day 10 had better be getting it right for his clients. The great crucible of forecasting is when you have to hit the forecast or your clients aren't going to pay you.
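The run-counting above can be illustrated with a little arithmetic. This is my own sketch, not anything from the article: assume, purely for illustration, that any single model run has some small chance of "verifying" by luck alone, and ask how likely it is that at least one run out of 40, 60, or 300 looks right.

```python
# Sketch of why selective verification inflates apparent skill:
# with enough runs to choose from, at least one is likely to look
# right by chance. The 5% per-run figure is a made-up assumption.

def prob_at_least_one_hit(p_single: float, n_runs: int) -> float:
    """Chance that at least one of n independent runs verifies,
    if each run has probability p_single of verifying by luck."""
    return 1 - (1 - p_single) ** n_runs

# 4 runs/day for 10 days = 40 tries; 15 days = 60; 5 AI models = 300.
for n in (40, 60, 300):
    print(f"{n} runs: {prob_at_least_one_hit(0.05, n):.3f}")
```

Even at a 5% per-run fluke rate, cherry-picking from 300 runs makes a "correct" forecast almost certain, which is exactly why verification has to score every run, not the best one.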

There is a move afoot to eliminate private-sector forecasting. It's not stated publicly, but ever since AccuWeather came on the scene, there has been a silent battle going on. It is why people so strongly opposed AccuWeather CEO Barry Myers's nomination for director of NOAA. Barry would have had to take a big pay cut to be the head of NOAA. If he were all about himself, why would he do that? But the fact is, love AccuWeather or hate them, they became a big pain in the butt for the NWS, because Dr. Joel Myers, who founded AccuWeather, figured that if he could get the best forecasters under one roof, he could efficiently and accurately forecast the weather for profit all over the country. Of course, what Barry went through was small beans compared to what we see going on now with some of today's nominees.

But the AI push would, with time, eliminate all private voices, if you can convince people that it is always right. So it's a level up from what AccuWeather had to face, and it's why most weather companies are now really media and data distribution outlets rather than the good old-time forecasters who would get into a fight over 2 degrees on a day-5 high temperature in NYC, or whether there would be snow flurries in Charlotte. (Ah, the good old non-politically-correct days were times I cherish: people for whom nailing a forecast was their mission in life and who would die on any hill to compete. Someday my opus, "For the Love of the Weather," will talk about it.)

So how can you put to rest the private voice on weather and climate? By simply convincing everyone that something is correct all the time and can't be beaten. But this is nothing new. Back in the 80s and 90s, the Hurricane Center was pushing the idea of a united front in the media. In other words, just say what we say, because we don't want confusion. Never mind that someone outside the Hurricane Center might have had the correct answer before the Hurricane Center did. That is not to say they are like that now.
I think they realize that with so much information out there, they can use it to refine their ideas. I do the same thing. If I see someone with an idea that challenges mine, I double down to see if they have a point. While I may disagree occasionally with the Hurricane Center, I love what they do and how much information they put out, if you actually listen to what they're saying. But the point is that when you set up a single authority over what is an infinite system, you eliminate the chance that someone other than that authority has the correct answer.

But the real telling part of the Barron's article, one where the author certainly did not contact me, now the oldest and most experienced private-sector global forecaster, is the point it makes that the AI being so good is extremely important because man-made climate change is leading to more extreme weather. And that's where they tip their hand. First of all, extreme weather is very good for my business. And since the knock on me is that I'm always looking for the weather to go to extremes, you would think someone like me would be rooting for extreme weather. Quite frankly, the weather more often than not is boring. It's more boring than it was in the 30s, 40s, and 50s. Now, I realize I'm a big weather geek, so I get my jollies looking at past weather events, but if you go back and look at the available maps, you would be asking yourself how the heck some of the extreme events of the 30s, 40s, and 50s actually happened. But do you think this author went back and looked at that? Of course not. They simply swallow the Kool-Aid. Think about this: the weather is essentially an infinite system. We have a database of weather events that have occurred before, but we did not measure them the same way we measure them now, so right away the input from past weather events into an AI model has to be questioned. In addition, because of the warming of the planet, which of course I believe is largely natural, the feedback is different than it was. By that I mean more energy available leads to a different result than less energy available: more chance for a reaction that would be DIFFERENT, NOT WORSE OR MORE, than 40 years ago, in both extreme and less extreme weather. What if LESS EXTREME events are becoming more common? No one keeps score on that. And what is less extreme? No one writes a book about all the nice weather there is, only about bad weather, and then blames man-made climate change.
So because of that, it becomes very difficult for any model to look at past events and come up with an exact answer. I'll give you an example. Suppose the AI model said Hurricane Milton was going to hit 20 miles north of Tampa Bay rather than 20 miles south of it. That's only a 40-mile error, but it's probably a $50 billion error. Why? Because the storm hitting 20 miles south of Tampa Bay meant no 15-foot storm surge up the bay; a hit to the north would have made it much worse. I watched the AI models, and five days out there was a group of them hitting 20 miles north of Tampa. So there were runs of the model that made a $50 billion error. In practical terms, is that accuracy? And believe me, skill scoring on modeling can be made to look very, very good. There are all different ways of doing it, which I'm not going to get into here.

But in the end, be careful. You have been sold one bill of goods after another over the past 10 years, and this is more of the same. It has elements of truth, but it is a one-sided presentation. Skill scores in classrooms are far different from the real world.