Scott, if you're really interested in understanding how models are used in science I suggest you take a course in scientific modeling. Most of the issues you're bringing up will be covered in an introductory course. I obviously don't teach modeling and I don't know how else to explain some of these concepts if they're still not clear to you. It really helps to actually go through the process of designing, building, and ground-truthing some of these things.
Also check out chapter 8 of the IPCC report for a discussion of how the models are built and evaluated, and a discussion of their shortcomings.
I guess we will never agree on this. It just doesn't make any sense to me that you are using accurate modeling of some aspects of emergent phenomena as a justification for claiming that these climate models must be able to accurately reproduce ANY phenomenon relating to climate.
You cannot say that emergent processes prove that you can accurately reproduce any phenomenon. Emergent processes increase confidence that the functional responses coupling sub-models are accurate. It's hard to get proper dynamics from coupled models if you have the functional responses wrong. Likewise, when emergent phenomena are unrealistic, it's a big cue that one or more of the underlying processes are wrong.
They also constrain tuning. If you change a parameter in one part of the model, it changes the outcome in all coupled sub-models. If you tune one part of the model to improve its fit, you can improve or reduce the skill at replicating other processes. Obviously, tuning one part in a way that decreases the overall skill of the model isn't a good idea.
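As a toy sketch of that constraint (not any real GCM; the sub-models and "observations" below are entirely invented), tuning a shared parameter moves every coupled output at once, so the best tuning is judged against all of them together:

```python
# Toy coupled system (hypothetical, for illustration only): two
# sub-models share one parameter k, so tuning k moves both outputs.

def submodel_a(k):
    # made-up functional response of process A to parameter k
    return 2.0 * k

def submodel_b(k):
    # made-up coupled process B, driven by A's output
    return 10.0 - submodel_a(k)

OBS_A, OBS_B = 3.0, 8.0  # invented "observations" to tune against

def total_error(k):
    # squared-error skill metric across BOTH coupled processes
    return (submodel_a(k) - OBS_A) ** 2 + (submodel_b(k) - OBS_B) ** 2

# k = 1.5 fits A exactly but worsens B; k = 1.0 fits B exactly but
# worsens A; the compromise k = 1.25 minimizes the combined error.
for k in (1.0, 1.25, 1.5):
    print(k, round(total_error(k), 3))
```

Tuning k to zero out process A's error alone would cost more skill on B than it gains on A, which is the sense in which coupled emergent behavior constrains tuning.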
We are, literally, learning every day about new things that have been ignored in these models and about their significance.
Yes, but they're not game changers. When you build a model you only try to include the most important factors, not all factors. You can add as many additional factors as you want, but you usually reach a point of diminishing returns where the uncertainty increases faster than the skill. Despite all the factors that aren't included in the GCMs, they already have a lot of skill. Adding additional factors is more about increasing resolution than significantly changing predictive skill. When I'm building a model I couldn't care less whether it's perfect or whether it includes every factor I can think of. All I care about is whether it has enough skill to be useful.
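A minimal sketch of the diminishing-returns point, with a completely invented dataset: a one-factor model that captures the dominant signal generalizes to held-out data, while an "include everything" stand-in that effectively memorizes the training data loses skill where it matters:

```python
import random

# Invented data: a dominant linear signal (slope 2) plus small noise.
random.seed(0)
xs = [i / 10 for i in range(20)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]
train_x, train_y = xs[:15], ys[:15]   # used to fit
test_x, test_y = xs[15:], ys[15:]     # held out to measure skill

# One-factor model: least-squares slope through the origin.
slope = sum(x * y for x, y in zip(train_x, train_y)) / \
        sum(x * x for x in train_x)

# "All factors" stand-in: memorize training points and answer with
# the nearest one (fits training data perfectly, generalizes poorly).
def memorized(x):
    return min(zip(train_x, train_y), key=lambda p: abs(p[0] - x))[1]

def rmse(predict, pts_x, pts_y):
    return (sum((predict(x) - y) ** 2
                for x, y in zip(pts_x, pts_y)) / len(pts_x)) ** 0.5

simple_err = rmse(lambda x: slope * x, test_x, test_y)
memo_err = rmse(memorized, test_x, test_y)
print(simple_err < memo_err)  # the simpler model has more skill here
```

The analogy is loose, but it shows why "more factors" and "more skill" are not the same thing.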
What if it is possible to recreate much of the emergent phenomena with only sub-models A, B, and C included, but to accurately model the systems that come into play in the case of anthropogenic warming you need sub-model D, which has, as yet, not been included in the greater climate model? My point is that there may exist systems that are not needed to recreate much of the emergent phenomena but that play an integral role in the case of anthropogenic warming.
Then your model will have little skill in replicating observations.
In what way are the current climate models based around path B (anthropogenic) when they are validated against and tweaked to fit prehistoric climate proxies?
They're tuned to and validated against modern observations from the anthropogenic era AND paleoclimate proxies, not one or the other.
Again, I disagree. There are a ton of models produced that have little to no basis in reality.
Again, I recommend you take a class in modeling. Models are abstractions of reality by definition. You can alter assumptions and use unrealistic inputs to learn more about the dynamics of the system, but for it to tell you anything about a real system a model has to be based on the reality of that system. This is no different than physical experiments.
String theory is a theory and conceptual model, not a simulation, which is what we've been discussing for the most part.
That turtle model did NOT test the hypothesis. It was useful and insightful, but it did not test the hypothesis. Did the model somehow force nature to obey its results? The only thing that could test the hypothesis is to actually monitor trends in the turtle population. I don't want to be rude, but you ought to review the tenets of the Scientific Method.
The hypothesis in the turtle experiment was one about the dynamics of the system, not the trajectory of the population. The only real-world test would be to go out and kill a lot of juveniles without killing adults, and then kill adults without killing juveniles. That's neither practical nor ethical. Simply tracking the population trend after regulations are in effect doesn't answer the question of whether the system is more sensitive to juvenile or adult mortality.
That is the reality of the situation in lots of scientific disciplines and your idea of what constitutes a test of a hypothesis seems severely skewed by your laboratory science background. In lots of disciplines you simply cannot run controlled experiments. You have to rely on natural experiments, analogs (including models), and in some cases observation.
Again, just because your discipline uses primarily laboratory experiments, that doesn't mean all disciplines work that way. Mine and many others do not and we often don't follow the simple Popperian progression.
I guess I should have said that models do not have to actually be faithful to reality.
Neither do physical experiments, except in the sense that you are using tangible objects rather than representations of tangible objects. I could perform a feeding experiment with sharks where I toss an alligator in their tank. Even though it's possible to perform the experiment in reality and get a real outcome governed by real processes, it would not tell me anything about a real system because the inputs and assumptions are unrealistic.
At this point I feel like I'm beating my head against a wall on the subject of models being constrained by reality, so I'm pretty much done discussing it.
Let's say that I had a pharmacokinetic model for penicillin. I am a doctor, and I use the model to determine the effectiveness of a particular dose regimen for a patient who just came into my office with bronchitis. I calculate the effectiveness using the model, and it seems the normal dose regimen would be effective. So I go ahead and administer the penicillin. After receiving the medicine, the patient goes into anaphylactic shock. What went wrong here? My model said the patient should be improving, not getting worse. The problem is that the model didn't include processes or inputs that address the possibility of allergic reactions.
The problem is that you're using a population-based model to try to predict an individual outcome. You're asking the model to do something it wasn't designed to do. Your pharmacokinetic model can still be very realistic and useful without including factors that don't affect the question you're trying to answer. If you wanted to make a pharmacokinetic model for an individual, you would not only include different factors but build the structure of the model entirely differently. If you didn't, the model would not be skillful or useful for the intended purpose. This example doesn't illustrate any inherent problem with models, only a problem with how you can misuse them; see your next point.
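A minimal one-compartment pharmacokinetic sketch (with invented, non-clinical parameters) makes the scope limitation concrete: the model has inputs for dose, distribution volume, and elimination rate, but no input at all for allergy status, so anaphylaxis is simply outside the question it answers:

```python
import math

def concentration(dose_mg, v_d_l, k_e_per_h, t_h):
    """Plasma concentration (mg/L) t_h hours after an IV bolus,
    one-compartment model: C(t) = (dose / V_d) * exp(-k_e * t)."""
    return (dose_mg / v_d_l) * math.exp(-k_e_per_h * t_h)

# Hypothetical regimen and parameters (illustration only, not clinical):
c0 = concentration(500, 25.0, 1.4, 0.0)  # concentration right after dosing
c2 = concentration(500, 25.0, 1.4, 2.0)  # decayed after two hours
print(c0, c2)
# Note: nothing here can represent an allergic reaction; the model is
# silent on that outcome by design, not wrong about it.
```

The model can be perfectly skillful at the question it was built for (drug concentration over time) while being useless for a question it was never given the inputs to address.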
I personally feel that documentation of the model's shortcomings is inadequate or underemphasized.
Again, I'll point you to chapter 8 of the IPCC report.