<a href=showthread.php?s=&postid=15755736#post15755736 target=_blank>Originally posted</a> by greenbean36191
There's a lot more to validation than just comparing R values. A big part of it is matching emergent patterns to real life processes. Do you get deserts and snowfall where they should be? Do you have upwelling zones, circulation cells, and convergence zones where they should be? Do you get realistic weather fronts and seasonality? Do you have a realistic troposphere and stratosphere with appropriate temperature trends? Do you get the right biomes in the right places? These things aren't explicit in the models. If you get the governing processes largely right, then emergent patterns should be fairly close to reality AND the data series produced should match observations. If you don't have the relationships right and you're just curve fitting, it's pretty hard to match the emergent patterns.
True, model validation is a complex process. The problem with your statement is that all of those emergent phenomena say nothing about how accurate the models are at reproducing anthropogenic global warming. I suppose we could say the models would be great at predicting the locations of deserts on a newly discovered planet, but that is not how these models are being used.
<a href=showthread.php?s=&postid=15755736#post15755736 target=_blank>Originally posted</a> by greenbean36191
Yes, and it doesn't matter because you define the relationship between parameters, not the process to get there.
For the example of oceanic CO2, you define the relationship between CO2, temp and pressure. If you change temperature or CO2, the other parameters change accordingly regardless of which variable you change. Then a separate sub-model takes the output from that model and calculates temperature based on the forcings present, which include CO2. Another separate sub-model modifies the growth rate of the biology based on the output of those models. The output of each sub-model becomes the input for the other models on the next iteration.
Mathematically (if it were just those three sub-models) it would be something like:
x = f(y, z)
y = g(x, z)
z = h(x, y)
It's nothing more than a coupled model with lots of parameters. They exhibit hysteresis naturally, especially as you add more parameters. Even simple, single-line models (e.g. rN[1 - (N/K)] - aN) will produce path-dependent outcomes.
You don't need to make direct measurements of every parameter to justify them.
You can derive the value of some parameters based on the values of others. If you were using the formula above to create a population model in the real world, you can't go out and directly measure "r" or "K." You derive them by measuring N.
You can also use sensitivity analysis, and yes, logic, to constrain parameters that you don't have values for (or have only questionable measurements of). If, for example, you wanted to know the sensitivity to a doubling of CO2, you can't go out and measure it directly. You would use lab experiments and radiative physics to give you a logical range of values. That was actually some of the first work done on global warming, about 100 years ago. Then you take that range of values and perform a sensitivity analysis to determine which values actually make sense. In the case of doubling CO2, a sensitivity of less than about 3 deg/doubling doesn't give you an Earth-like climate in either the observational or the historical era.
You also do sensitivity analyses on known values to see how much difference measurement errors make.
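As a concrete illustration of the coupled structure described in the quote, here is a minimal sketch. The sub-model names, update rules, and constants are all invented for illustration; none of this is taken from any actual climate model.

[code]
# Toy sketch of three coupled sub-models whose outputs become each other's
# inputs on the next iteration. All relationships and constants are invented.

def ocean_co2(temp, bio):
    # x = f(y, z): hypothetical relation, CO2 rises with temperature and
    # falls with biological uptake
    return 280.0 + 4.0 * temp - 2.0 * bio

def surface_temp(co2, bio):
    # y = g(x, z): hypothetical forcing relation, temperature rises with CO2
    return 14.0 + 0.01 * (co2 - 280.0) - 0.05 * bio

def bio_growth(co2, temp):
    # z = h(x, y): hypothetical growth response to CO2 and temperature
    return 5.0 + 0.02 * (co2 - 280.0) + 0.1 * (temp - 14.0)

x, y, z = 300.0, 14.0, 5.0   # initial CO2 (ppm), temperature (deg C), biology

for step in range(50):
    # every sub-model sees only the previous iteration's outputs
    x, y, z = ocean_co2(y, z), surface_temp(x, z), bio_growth(x, y)

print(round(x, 1), round(y, 2), round(z, 2))
[/code]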
I'd disagree. The path taken is important because a model can reproduce path "A" accurately while being very poor at modeling path "B." Path "B" could involve greatly different processes, with different rates, different intermediates, different sinks, etc. If the model is only validated against data in which path "A" is at work, then the problems the model has with path "B" will never become evident. And if the conditions that favor path "B" are what we really care about, then the model is doing a poor job of telling us meaningful information about the things we care about.
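To illustrate the point, here is a minimal sketch assuming a toy bistable system (not any real climate process). Sweeping a forcing up and then back down leaves the system in different states at the same forcing value, so behavior validated along one path need not hold along the other.

[code]
# Toy hysteresis demo: the state depends on the history of the forcing,
# not just its current value. The system dx/dt = f + x - x**3 is invented
# purely to illustrate path dependence.

def settle(x, f, steps=4000, dt=0.01):
    # integrate the toy system until it relaxes toward an equilibrium
    for _ in range(steps):
        x += dt * (f + x - x**3)
    return x

forcings = [i * 0.05 for i in range(-12, 13)]   # f from -0.6 to +0.6

x = -1.0
upward = {}
for f in forcings:                # path "A": forcing slowly increases
    x = settle(x, f)
    upward[f] = round(x, 3)

downward = {}
for f in reversed(forcings):      # path "B": forcing slowly decreases
    x = settle(x, f)
    downward[f] = round(x, 3)

# same forcing (f = 0), different state, depending on the path taken
print(upward[0.0], downward[0.0])
[/code]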
Regarding the use of sensitivity analyses: a sensitivity analysis doesn't tell you whether your assumptions are correct. Lab experiments don't tell you how things really behave in a complex system like the climate; they only tell you something very specific about a specific set of highly controlled experimental conditions. A sensitivity analysis only tells you the range of model outputs produced when a parameter is varied across a range of values. The true value of that parameter in the real, complex system may or may not lie within that range.
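As a minimal sketch of what a sensitivity analysis actually produces, assume a made-up one-line warming model and an arbitrary range of parameter values; the sweep maps assumed values to outputs and says nothing about which value, if any, is the true one.

[code]
import math

def warming(co2_ratio, sensitivity):
    # hypothetical one-line model: warming scales with the log of the CO2 ratio
    return sensitivity * math.log2(co2_ratio)

co2_ratio = 2.0   # a doubling of CO2
for sensitivity in [1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]:   # assumed range, deg C per doubling
    print(f"sensitivity = {sensitivity:.1f} -> modeled warming = "
          f"{warming(co2_ratio, sensitivity):.2f} deg C")
[/code]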
Using logic to constrain parameter values is sometimes dangerous. If you are forced to rely on logic to constrain a parameter, that means you have a poor understanding of that parameter; otherwise, you would have relied on something more concrete. Making assumptions about a parameter you understand poorly can result in large errors.
<a href=showthread.php?s=&postid=15755736#post15755736 target=_blank>Originally posted</a> by greenbean36191
Most scientists are skeptical of EVERYTHING since that's what science entails. But skepticism does not equate to rejection out-of-hand. Even modelers have the saying that all models are wrong, but some models are useful.
I think this depends on how the model is being used. Almost every model is useful only for a fairly limited range of purposes, and a model applied outside that range is meaningless. My skepticism toward climate models is rooted in exactly that range: I am skeptical that these models are useful for predicting the future of anthropogenic warming.
<a href=showthread.php?s=&postid=15755736#post15755736 target=_blank>Originally posted</a> by greenbean36191
An experiment is a test of a hypothesis- nothing more, nothing less. There is no requirement for a lab experiment to be based in physical reality any more than any useful model has to be. The Miller-Urey experiment comes to mind. We know the experimental conditions were unrealistic, but they still provided a test of the hypothesis that you can spontaneously create complex organic molecules from simple ingredients. You can put unrealistic input into models too, but in order for the model to tell you anything about the system, the structure of the model has to be based in reality. If it's not based in reality then it doesn't tell you anything about the system you're trying to model.
All lab experiments must be based in physical reality, though sometimes the applicability of the experiment is overstated. In the Miller-Urey experiment, they tested the hypothesis that complex organic molecules could be created under a specific set of conditions. That is all they proved. The assertion that this experiment demonstrates the mechanism of the abiotic origin of life is an overstatement, and perhaps unrealistic.
<a href=showthread.php?s=&postid=15755736#post15755736 target=_blank>Originally posted</a> by greenbean36191
Not every science has the luxury of studying systems that we can set up on the lab bench or run repeated tests on. When you can't do that you use a model to test your hypothesis. The fact that there are statistical tests like AIC designed to tell you which hypothesis is best supported by a model argues against the point that models aren't used to test hypotheses.
I disagree. A model can never test a hypothesis about a physical reality. Let's say I develop a hypothesis that the population of the world will increase by 10% in the next 5 years. Now, let's say I develop a model that demonstrates a 10% increase over 5 years. Did that model test the hypothesis? No. The only thing that will test the hypothesis is observation of the population over the next 5 years.
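A toy sketch of the point, with an invented growth rate: the model "demonstrates" the 10% increase only because the assumed rate was chosen to produce it, so running the model tests nothing about the real population.

[code]
# Toy population model with an assumed annual growth rate. The ~10% rise
# over 5 years follows from the assumed rate, not from any observation.

population = 8.0e9        # illustrative starting population
growth_rate = 0.0193      # assumed annual rate, chosen so (1 + r)**5 is about 1.10

for year in range(5):
    population *= 1 + growth_rate

print(f"modeled increase after 5 years: {population / 8.0e9 - 1:.1%}")
[/code]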
<a href=showthread.php?s=&postid=15755736#post15755736 target=_blank>Originally posted</a> by greenbean36191
Kinetics, bonding affinities and orientations, electron locations- all things I learned in fundamentals of chemistry back in high school. They're all models, even if you don't call them that.
In biochem, things like protein folding, enzyme activity, DNA replication- all based on models and often predicted by computer simulations.
Ahhh... but maybe something they never told you in high school is that all of those types of models are just used to simplify calculations for routine, day-to-day chemistry problems. Chemists use all of these models with the complete understanding that they are all wrong. In fact, you can find many, many instances where these models fail horribly. In more advanced chemistry classes, professors nearly always make the problems, limitations, and useful range of the models very clear.
This is something that is poorly understood in climate modeling and also very poorly communicated by climatologists.
If you actually examine how these models are used in biochemistry, you'll find they are used either to propose a mechanism for a reaction or to narrow down a pool of candidate biochemical molecules of interest. In the case of determining a reaction mechanism, the models are utterly useless unless a great deal of experimental data is available. The modelers tweak orientations, placements of solvent molecules, etc., until they get pretty close to the experimentally determined values. This does not test the hypothesis about the reaction mechanism; it simply provides insight into the reaction process. In the case of narrowing down potentially useful biochemical species, the models are only used as a cheaper alternative to actual experiments, not to determine the truth. That can only be achieved through experiment. In fact, the models are often incorrect, and researchers are thrown red herrings by them.
Scott