Am I wrong in saying that such a device and method could provide us with accurate temp readings, and that we could then model the future based on the huge amount of data we have collected?
If you only have the record from your backyard, you can't really say much of anything, regardless of how long you collect. You don't have any replicates or other locations that allow you to separate the signal from the noise. If you have a few hundred of these placed at various places around the world, then you could make a very good plot of global temperature trends, but you still can't make any predictions about the future, because you don't know what caused the changes in the data. To model the future you need to know the mechanisms of change. To do that, you would need to look at measurements of other factors like cloud cover, wind, rain, amount of light, etc., and see how they interact.
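To make the replicates point concrete, here's a minimal sketch (entirely hypothetical numbers, not real station data) of why averaging many independent records separates a shared signal from local noise, while a single record can't:

```python
import random

random.seed(0)
YEARS = 100
N_STATIONS = 300

# A shared global "signal": a slow warming trend (made-up magnitude).
signal = [0.01 * year for year in range(YEARS)]

# Each station sees the shared signal plus its own local noise.
def station_record():
    return [s + random.gauss(0, 0.5) for s in signal]

single = station_record()
stations = [station_record() for _ in range(N_STATIONS)]

# Averaging across stations cancels the uncorrelated local noise,
# leaving mostly the shared signal.
network_mean = [sum(rec[t] for rec in stations) / N_STATIONS
                for t in range(YEARS)]

def rmse(series):
    """Root-mean-square error of a series against the true signal."""
    return (sum((x - s) ** 2 for x, s in zip(series, signal)) / YEARS) ** 0.5

print(f"single station RMSE:   {rmse(single):.3f}")
print(f"300-station mean RMSE: {rmse(network_mean):.3f}")
```

The network mean tracks the underlying trend far more closely than any one backyard thermometer, which is the whole point of having replicates.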
I think it's important to point out that the proxies are simply measurements. You can tell past trends with them, but you can't use them on their own to forecast. When combined, they can be useful for understanding how factors influence each other and for judging whether a trend is likely to continue, but they don't factor in feedbacks. Modeling is a separate discipline that does use proxies, but it really relies more on physics. Without the proxies you could still, in theory, make models, but you would only have about 200 years' worth of data to verify them against. So with enough data but no models, or with models but no proxy data, you could still make good predictions, but when you combine the two you can refine those predictions.
It's also important to point out that when real-world data is used to create models, the same data can't be used to verify them.
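A quick sketch of that train/verify separation (hypothetical anomaly data, simple least-squares fit): the fit only ever sees the training portion, and its skill is measured on data it never touched.

```python
import random

random.seed(1)

# Hypothetical yearly anomalies: a linear trend plus noise.
data = [(year, 0.02 * year + random.gauss(0, 0.3)) for year in range(100)]

# Split: fit on the first 70 years, verify on the held-out last 30.
train, test = data[:70], data[70:]

# Least-squares slope and intercept computed from the training data only.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
intercept = mean_y - slope * mean_x

# Verification error is measured only on data the fit never saw.
test_rmse = (sum((slope * x + intercept - y) ** 2 for x, y in test)
             / len(test)) ** 0.5
print(f"slope fitted on training data: {slope:.4f}")
print(f"RMSE on held-out data: {test_rmse:.3f}")
```

If you scored the fit on the same years you fitted it to, you'd learn nothing about whether it generalizes; the held-out years are what make the verification honest.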
Again, my skepticism comes from too many years working with people who were paid, in effect, to have a particular outcome notwithstanding the data.
That's the difference between a scientist and a consultant. :lol: In all seriousness though, that is a big problem with privately produced research, and to a much smaller degree in "pure" science. In academia the researchers usually have a set salary regardless of what kind of research they produce or how much grant money they bring in. There is a very low cap (usually a month's salary) placed on the additional money they can receive from grants. To get those grants, each application is reviewed independently and without the reviewers having any knowledge of who wrote it (to remove bias based on previous work). The money is usually given out prior to the start of the research or in phases based on progress, but it's never dependent on the end result. When you go to publish, you have to go through more review, where the job of the reviewers is to find methodological or reasoning errors in the paper. If it stands up to that, it's still subject to audit and refutation by other researchers after publication. Unless you're good enough at fudging numbers to keep other experts from noticing, it's not likely to pass. Sometimes it happens, but usually it's restricted to more obscure journals. The process is designed to be self-correcting, whether the errors were due to limited knowledge or intentional fudging.
What I mean is that there seems to be some pretty good data recently (for the sake of argument, the last 10,000 years). But beyond that, the analysis becomes very, very weak. My brain says, well, when you are missing such a huge piece of the picture, you need to be very careful.
Well, it's a lot longer than that, but the real question is how long the record really needs to be. The mix of forcings changes with time, but the physics behind their impacts doesn't. Like I mentioned before, even without the proxy data we can still create the models based on physics: 100 lbs of CO2 today has the same impact as 100 lbs of CO2 100,000 years ago. We have excellent data going back long before civilization, which gives us plenty of time to look at what was going on without the influence of humans. You could do away with all but the last 20,000 years of data, or add another million years to the tail end, and it wouldn't change the understanding or the predictions.
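As one illustration of physics that doesn't care what year it is: the widely used simplified expression for CO2 radiative forcing (Myhre et al. 1998, the logarithmic approximation used in IPCC reports) takes only a concentration as input, not a date. The 280 ppm baseline here is the commonly cited pre-industrial level.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 relative to a baseline
    concentration (Myhre et al. 1998 logarithmic approximation)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# No "date" input: a given concentration produces the same forcing
# whether it occurs today or 100,000 years ago.
print(f"forcing at 280 ppm (baseline): {co2_forcing(280):.2f} W/m^2")
print(f"forcing at 420 ppm:            {co2_forcing(420):.2f} W/m^2")
print(f"forcing at doubling (560 ppm): {co2_forcing(560):.2f} W/m^2")
```

That's the sense in which adding or removing millennia of proxy data doesn't change the physical machinery of the models, only the record they're checked against.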
I guess what I am missing is the confidence in the analysis of the proxy data, which apparently shows changes associated with human history (the industrial revolution). Is that, for instance, simply correlation? Are there other possible explanations aside from human activity?
Not that we know of. We know roughly how much sinking/production of greenhouse gases and aerosols can be accounted for by natural events like volcanoes. Things like orbital changes, sunspots, and feedbacks like clouds are also accounted for. Still, we can't account for the discrepancy between the real-world numbers for the 20th century and the models unless man-made factors are included too.
Correlation only shows relationships, never causes. We can show that there is a correlation between atmospheric CO2 and temperature, but either one could be a cause of the other. The confidence in the causation comes from estimates of anthropogenic greenhouse gas production combined with physics, which ultimately give rise to the models, which are in turn confirmed by observational data. Basically, it's the result of the intertwining and agreement of various pieces of evidence, any one of which could be taken away without causing the theory to collapse.
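A toy demonstration of why correlation alone can't pick a direction (purely synthetic data): here two series are both driven by a hidden common factor, neither causes the other, and yet they're strongly correlated. Nothing in the correlation number tells you which way, if any, the causation runs; that has to come from outside knowledge like the physics.

```python
import random

random.seed(2)

# A hidden common driver: both series respond to it; neither causes the other.
driver = [random.gauss(0, 1) for _ in range(1000)]
a = [d + random.gauss(0, 0.3) for d in driver]
b = [2 * d + random.gauss(0, 0.3) for d in driver]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

# Strong correlation despite zero direct causal link between a and b.
print(f"correlation(a, b) = {pearson(a, b):.3f}")
```

That's why the causal claim rests on the physics and emission estimates rather than on the CO2-temperature correlation by itself.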