Ironically, writing models has kept me from addressing modeling for the past few weeks.
First, the models are being validated against a VERY short period of anthropogenic warming (i.e. the last 50-100 years). I fail to see how they can produce a reliable validation based upon data from such a short period of time. Validation against non-anthropogenic climate changes (i.e. the reconstructions) doesn't carry much weight when the processes driving anthropogenic climate change differ from those driving non-anthropogenic climate change.
There's a lot more to validation than just comparing correlation (R) values. A big part of it is matching emergent patterns to real-life processes. Do you get deserts and snowfall where they should be? Do you have upwelling zones, circulation cells, and convergence zones where they should be? Do you get realistic weather fronts and seasonality? Do you have a realistic troposphere and stratosphere with appropriate temperature trends? Do you get the right biomes in the right places? These things aren't explicit in the models. If you get the governing processes largely right, then emergent patterns should be fairly close to reality AND the data series produced should match observations. If you don't have the relationships right and you're just curve fitting, it's pretty hard to match the emergent patterns.
That is not necessarily true at all. These complex climate processes can be, and often are, non-reversible along the same pathway. Different processes can be encountered going in different directions between two points. The pathway in one direction is not always the same as the pathway in the opposite direction.
Yes, and it doesn't matter because you define the relationship between parameters, not the process to get there.
For the example of oceanic CO2, you define the relationship between CO2, temperature and pressure. If you change temperature or CO2, the other parameters change accordingly regardless of which variable you change. Then a separate sub-model takes the output from that model and calculates temperature based on the forcings present, which include CO2. Another separate sub-model modifies the growth rate of the biology based on the output of those models. The output of each sub-model becomes the input for the other models on the next iteration.
Mathematically (if it were just those three sub-models) it would be something like:
x = f(y, z)
y = g(x, z)
z = h(x, y)
It's nothing more than a coupled model with lots of parameters. Coupled models exhibit hysteresis naturally, especially as you add more parameters. Even simple, single-line models (e.g. dN/dt = rN[1 - (N/K)] - aN) will produce path-dependent outcomes.
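To make that structure concrete, here's a minimal Python sketch of the coupled iteration described above. The three update functions and every number in them are made-up placeholders standing in for the ocean-CO2, forcing, and biology sub-models, not real climate physics.

```python
import numpy as np

# Toy coupled model: three sub-models, each taking the others' previous
# outputs as inputs, iterated forward in time. The functional forms are
# illustrative placeholders, not real climate physics.

def ocean_co2(temp, pressure):
    # Placeholder solubility relationship: warmer water holds less CO2.
    return 400.0 + 10.0 * (temp - 15.0) - 0.05 * (pressure - 1013.0)

def temperature(co2, bio):
    # Temperature responds logarithmically to CO2, slightly damped by biology.
    return 15.0 + 3.0 * np.log2(co2 / 280.0) - 0.2 * bio

def biology(co2, temp):
    # Growth rises with CO2 and falls off away from an optimal temperature.
    return max(0.0, 1.0 + 0.002 * (co2 - 280.0) - 0.05 * (temp - 18.0) ** 2)

temp, co2, bio, pressure = 15.0, 400.0, 1.0, 1013.0  # initial state

for step in range(50):
    # Each sub-model's output becomes the others' input on the next iteration.
    co2, temp, bio = (ocean_co2(temp, pressure),
                      temperature(co2, bio),
                      biology(co2, temp))

print(f"after 50 iterations: CO2={co2:.1f} ppm, T={temp:.2f} C, biology={bio:.2f}")
```

The point of the sketch is only the wiring: each relationship is defined between parameters, and the path the system takes emerges from iterating the coupled set, not from anything you script explicitly.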
How do you make a direct observation of all of the processes that play a role in climate change? You can't. Modelers are forced to make assumptions and estimations due to a lack of data. If they had all the data they needed, they wouldn't have to make assumptions and estimations. Because of this lack of data, these assumptions are "justified" by logic rather than direct observation.
You don't need to make direct measurements of every parameter to justify them.
You can derive the value of some parameters based on the values of others. If you were using the formula above to create a population model of a real-world species, you couldn't go out and directly measure "r" or "K." You would derive them by measuring N.
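As a hedged sketch of that kind of derivation (assuming the logistic model above, SciPy's curve_fit, and made-up survey data): you fit r and K to a measured series of population counts N(t) rather than measuring them directly.

```python
import numpy as np
from scipy.optimize import curve_fit

# Closed-form solution of logistic growth dN/dt = r*N*(1 - N/K),
# starting from N0 individuals at t = 0.
def logistic(t, r, K, N0):
    return K / (1.0 + (K - N0) / N0 * np.exp(-r * t))

# Synthetic "field data": yearly counts generated with r=0.4, K=1000,
# plus noise. These stand in for real survey measurements of N.
rng = np.random.default_rng(0)
t = np.arange(0, 25)
N_obs = logistic(t, 0.4, 1000.0, 50.0) + rng.normal(0, 20, t.size)

# You never measure r or K directly; you derive them by fitting the
# model to the measured N(t).
(r_fit, K_fit, N0_fit), _ = curve_fit(logistic, t, N_obs, p0=[0.1, 800.0, 40.0])
print(f"derived r = {r_fit:.3f}, K = {K_fit:.0f}")
```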
You can also use sensitivity analysis, and yes, logic, to constrain values that you don't have measurements for (or have only questionable measurements of). If, for example, you wanted to know the sensitivity to a doubling of CO2, you can't go out and measure it directly. You would use lab experiments and radiative physics to give you a logical range of values. That was actually some of the first work done on global warming, about 100 years ago. Then you take that range of values and perform a sensitivity analysis to determine which actually make sense. In the case of doubling CO2, a sensitivity of less than about 3 deg/doubling doesn't give you an Earth-like climate in either the observational or the historical era.
You also do sensitivity analyses on known values to see how much difference measurement errors make.
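Here's a toy version of that kind of sweep, with purely illustrative numbers: try a range of candidate sensitivities, compute the warming each implies for a rough industrial-era CO2 rise, and see which fall inside an assumed observed-warming window. The concentrations and the window below are round placeholder figures, and the calculation ignores the lag between forcing and the equilibrium response, so it's a sketch of the method, not an actual constraint on sensitivity.

```python
import numpy as np

# Candidate values for climate sensitivity (deg C per doubling of CO2),
# e.g. from lab radiative-physics estimates.
sensitivities = np.arange(1.0, 6.5, 0.5)

co2_preindustrial = 280.0      # ppm, rough round number
co2_now = 415.0                # ppm, rough round number
observed_warming = (0.9, 1.3)  # deg C window, illustrative only

for s in sensitivities:
    # Warming implied by this sensitivity for the assumed CO2 rise,
    # using a simple logarithmic forcing response.
    dT = s * np.log2(co2_now / co2_preindustrial)
    consistent = observed_warming[0] <= dT <= observed_warming[1]
    print(f"sensitivity {s:.1f} C/doubling -> {dT:.2f} C  "
          f"{'consistent' if consistent else 'ruled out'}")

# The same sweep idea works for "known" values: perturb a measured
# parameter across its error bars and see how much the output changes.
```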
In my experience, most scientists are skeptical of the reliability of models, as they should be.
Most scientists are skeptical of EVERYTHING since that's what science entails. But skepticism does not equate to rejection out of hand. Even modelers have the saying that all models are wrong, but some models are useful.
An experiment, by nature, MUST have a basis in physical reality, whereas modeling does not have to fulfill this requirement. I think most (good) scientists see modeling as a tool to be used in the exploration of the interactions going on within complex systems, but not as some kind of experiment. Experiments, again by nature, are able to confirm or refute a hypothesis. Models have no ability to do this.
An experiment is a test of a hypothesis- nothing more, nothing less. There is no requirement for a lab experiment to be based in physical reality any more than any useful model has to be. The Miller-Urey experiment comes to mind. We know the experimental conditions were unrealistic, but they still provided a test of the hypothesis that you can spontaneously create complex organic molecules from simple ingredients. You can put unrealistic input into models too, but in order for the model to tell you anything about the system, the structure of the model has to be based in reality. If it's not based in reality, then it doesn't tell you anything about the system you're trying to model.
Not every science has the luxury of studying systems that we can set up on the lab bench or run repeated tests on. When you can't do that, you use a model to test your hypothesis. The fact that there are statistical tests like AIC, designed to tell you which hypothesis the data best support, argues against the claim that models aren't used to test hypotheses.
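For the AIC point, here's a small sketch of how it's used in practice: fit two competing models (here, a linear trend versus a saturating curve) to the same data and compare their AIC scores; the lower score marks the better-supported hypothesis, with a built-in penalty for extra parameters. The data are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic_least_squares(rss, n_points, n_params):
    # Least-squares form of AIC: n*ln(RSS/n) + 2k. Lower is better.
    return n_points * np.log(rss / n_points) + 2 * n_params

# Synthetic observations (illustrative only): a saturating "truth" plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 5.0 * x / (2.0 + x) + rng.normal(0, 0.2, x.size)

# Hypothesis 1: linear response.   Hypothesis 2: saturating response.
linear = lambda x, a, b: a * x + b
saturating = lambda x, vmax, k: vmax * x / (k + x)

results = {}
for name, model, p0 in [("linear", linear, [1.0, 0.0]),
                        ("saturating", saturating, [4.0, 1.0])]:
    popt, _ = curve_fit(model, x, y, p0=p0)
    rss = np.sum((y - model(x, *popt)) ** 2)
    results[name] = aic_least_squares(rss, x.size, len(popt))
    print(f"{name}: AIC = {results[name]:.1f}")

best = min(results, key=results.get)
print(f"Better-supported hypothesis: {best}")
```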
As a chemist, I would say that I do not consider them a fundamental part of chemistry.
Kinetics, bonding affinities and orientations, electron locations- all things I learned in fundamentals of chemistry back in high school. They're all models, even if you don't call them that.
In biochem, things like protein folding, enzyme activity, DNA replication- all based on models and often predicted by computer simulations.