This was an eye opener - cont.

Status
Not open for further replies.
Well said greenbean. You've more patience with this than I do :thumbsup:

BTW, I've been following this stuff since I started college in the early 80's. Seen lots of changes to the data and methods, new things being learned about how it all works, and you know what? All I've seen has simply strengthened and reinforced the fact that man is indeed having an impact on the environment. While it might not be the driving force, it certainly is a force that is very definitely present.
 
Scott, if you're really interested in understanding how models are used in science I suggest you take a course in scientific modeling. Most of the issues you're bringing up will be covered in an introductory course. I obviously don't teach modeling and I don't know how else to explain some of these concepts if they're still not clear to you. It really helps to actually go through the process of designing, building, and ground-truthing some of these things.

Thanks for the suggestion, but I am well aware of how scientific modeling works. I do it for a living. I just had a peer-reviewed paper published in which I compare several competing toxicity models and put forth a rather simple model of my own design.

You cannot say that emergent processes prove that you can accurately reproduce any phenomenon. Emergent processes increase confidence that the functional responses coupling sub-models are accurate. It's hard to get proper dynamics from coupled models if you have the functional responses wrong. Likewise, when emergent phenomena are unrealistic it's a big cue that one or more of the underlying processes are wrong.

They also constrain tuning. If you change a parameter in one part of the model it changes the outcome at all coupled sub-models. If you tune one part of the model to improve the fit, you can improve or reduce the skill at replicating other processes. Obviously tuning an area that decreases the overall skill of the model isn't a good idea.

I'll agree with all of that. Replication of emergent phenomena is probably a requirement of a valid model, but does not actually prove that a model is valid. That was all that I was trying to say.

Yes, but they're not game changers. When you build a model you only try to include the most important factors, not all factors. You can add as many additional factors as you want, but you usually reach a point of diminishing returns where the uncertainty increases faster than the skill. Despite all the factors that aren't included in the GCMs, they already have a lot of skill. Adding additional factors is more about increasing resolution than significantly changing predictive skill. When I'm building a model I couldn't care less if it's perfect or if it includes all the factors I can think of. All I care about is whether it has enough skill to be useful.
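The diminishing-returns point can be illustrated with a toy curve fit (entirely hypothetical data, nothing to do with an actual GCM): past a certain complexity, extra parameters stop improving performance on data the model wasn't tuned to.

```python
import numpy as np

# Hypothetical data: a smooth signal plus noise, split into points used
# for fitting ("tuning") and held-out points used to judge skill.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)
fit_idx = slice(0, None, 2)    # even-indexed points: used to fit
held_idx = slice(1, None, 2)   # odd-indexed points: held out

def held_out_mse(degree):
    """Mean squared error on held-out points for a polynomial fit."""
    coef = np.polyfit(x[fit_idx], y[fit_idx], degree)
    pred = np.polyval(coef, x[held_idx])
    return float(np.mean((pred - y[held_idx]) ** 2))

errors = {deg: held_out_mse(deg) for deg in (1, 3, 9)}
print(errors)
```

A degree-1 fit misses the signal entirely; a degree-3 fit captures most of it; pushing the degree higher mostly fits noise rather than adding skill.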

I'm not sure how you can say that nothing we will find out about the climate could be a "game-changer". Much of what we are currently discovering is not an entirely new phenomenon; rather, we are discovering that phenomena we already knew about are much more important than we once thought.



They're tuned to and validated against modern observations from the anthropogenic era AND paleoclimate proxies, not one or the other.

Yes, but they are tuned to millions of years of paleoclimate proxies and tuned to ~50-100 years of data in the anthropogenic era. That skews the validation process. Validation against millions of years of paleoclimatic data says nothing about how well the model performs at predicting anthropogenic climate change. So that leaves us with 50-100 years of data in which anthropogenic warming could be at play. Is 50-100 years of modern data sufficient to gauge the accuracy of these models at predicting anthropogenic warming? I don't know, and I don't think anyone really knows. That is a difficult question to answer.



Again, I recommend you take a class in modeling. Models are abstractions of reality by definition. You can alter assumptions and use unrealistic inputs to learn more about the dynamics of the system, but for it to tell you anything about a real system a model has to be based on the reality of that system. This is no different than physical experiments.

String theory is a theory and conceptual model, not a simulation, which is what we've been discussing for the most part.

I'm not saying that modelers don't seek to replicate reality, but those models are simplifications of reality. Sometimes the simplifications make the models poorly reflect reality. Sometimes a modeler is working with parts of a system that are too poorly understood or are too complex to model in detail, so it is modeled as a simplified system that is not faithful to reality under all circumstances.

The hypothesis in the turtle experiment was one about the dynamics of the system, not the trajectory of the population. The only real-world test would be to go out and kill a lot of juveniles without killing adults and then kill adults without killing juveniles. That's not practical or ethical. Simply tracking the population trend after regulations are in effect doesn't answer the question about whether the system is more sensitive to juvenile or adult mortality.

That is the reality of the situation in lots of scientific disciplines and your idea of what constitutes a test of a hypothesis seems severely skewed by your laboratory science background. In lots of disciplines you simply cannot run controlled experiments. You have to rely on natural experiments, analogs (including models), and in some cases observation.

Again, just because your discipline uses primarily laboratory experiments, that doesn't mean all disciplines work that way. Mine and many others do not, and we often don't follow the simple Popperian progression.

I would say that the model was useful in exploring/elucidating the turtle population dynamics and mechanisms, but the model still did not test a hypothesis. Just because the model replicates certain observations does not mean that the mechanisms at play in the model are the same mechanisms causing the observed effect.

The turtle example is similar to how modeling is used in computational chemistry to explore/elucidate reaction mechanisms. I have never heard a computational chemist claim that their model proves (i.e. tests the hypothesis) that a particular reaction mechanism is occurring. They may say that a particular reaction mechanism is supported by experimental data, but they never make the claim that a hypothesis is tested by the model.

Look in any introductory science textbook. Hypotheses are tested by experiment or OBSERVATION. ALL sciences for which experiments are not a possibility rely on observation to test hypotheses. Astronomers can't rearrange the universe, so they have developed telescopes to observe. Marine biologists can't re-create the entire ocean in a lab, so they have dive gear to allow them to go out and observe. In all cases (not just some), science is performed through observation, experimental or otherwise.

Neither do physical experiments except in the sense that you are using tangible objects rather than representations of tangible objects. I could perform a feeding experiment with sharks where I toss an alligator in their tank. Even though it's possible to perform the experiment in reality and get a real outcome governed by real processes it would not tell me anything about a real system because the inputs and assumptions are unrealistic.

At this point I feel like I'm beating my head against a wall on the subject of models being constrained by reality, so I'm pretty much done discussing it.

It does tell you something about a real system. Yes, it might not be a system that you frequently encounter outside the experiment, but it is a real system. There are saltwater crocs. Maybe alligators could be considered a proxy for crocs and that experiment helps you understand something about the interactions between saltwater crocs and sharks.

The problem is that you're using a population-based model to try to predict an individual outcome. You're asking the model to do something that it wasn't designed to do. Your pharmacokinetic model can still be very realistic and useful without including factors that don't affect the question you're trying to answer. If you wanted to make a pharmacokinetic model for an individual you would not only include different factors, but build the structure of the model entirely differently. If you didn't, the model would not be skillful or useful for the intended purpose. This example doesn't illustrate any inherent problem with models, only a problem with how you can misuse them (see your next point).

Ok, I think you are missing my point here. Let's say that the model was tuned to the individual. Liver function of the individual was assessed using enzyme levels. Kidney function of the individual was assessed from a urine sample. Organ volumes of the individual were assessed using some kind of imaging system. Et cetera. It doesn't change the result of my example. You can have models that include all of the necessary structure to accurately reproduce phenomena under most circumstances (like millions of years of paleoclimatic data), but fail horribly under other circumstances. Again, I am not sure that 50-100 years of data from when anthropogenic warming is at play is sufficient to validate these models for the purposes of predicting the future of anthropogenic warming. If you are certain that 50-100 years of data is sufficient, then please enlighten us as to why you feel this way.

Scott
 
greenbean:

"You honestly think we're so bad at going out and taking a temperature reading with an XBT or even a bucket and thermometer that you can do it significantly better from 200 miles up?"

What I am saying, is that that from satellites we can measure:

Atmospheric Vertical Temperature & Moisture profiles
Total Ozone profiles
Global Albedo
Cloud Optical thickness
Cloud top height, pressure & temperature
Land & Sea surface temp
Ocean color/chlorophyll content
Aerosol optical thickness
Solar-reflected and Earth-emitted radiation from top of the atmosphere to the surface

etc.
over the entire surface of the planet.

Are you suggesting that this is no better than a handful of measurements sprinkled over the parts of the planet where the humans are??

I am not saying that we don't have temperatures for the last 300 years.

I AM saying that we do not have all of the OTHER data listed above for the last 300 years.
So how do the modelers KNOW what all of the forcings were for the last 300 years? They must infer what most of them were USING THE MODEL.



Unrelated to climate change, but another example where the scientists got the model wrong:

Mystery Emissions Spotted at Edge of Solar System

Quote:

"scientists said they were surprised to discover the striking band in IBEX's sky maps, because no models had predicted such a pattern beforehand"

"some fundamental physics is missing from our understanding."


http://www.space.com/scienceastronomy/091015-space-bubble.html

Stu
 
Are you suggesting that this is no better than a handful of measurements sprinkled over the parts of the planet where the humans are??

I am not saying that we don't have temperatures for the last 300 years.

I AM saying that we do not have all of the OTHER data listed above for the last 300 years.
So how do the modelers KNOW what all of the forcings were for the last 300 years? They must infer what most of them were USING THE MODEL.
No, no, and no.

Satellites are definitely useful for increasing spatial resolution. They're not necessarily more accurate than surface readings, though. Throughout this thread and the last you've claimed that we only REALLY know temperature for the last 30 years of the satellite era. In the US alone we have something like 1200 surface stations measuring temperature, so there's hardly a lack of spatial coverage. Still, the satellites and surface records disagree about the temperature anomalies and even which years were the hottest. Both have had multiple corrections. Which record is the right one?

Satellites are definitely useful. Most of the things you listed, satellites have helped us understand a lot better. However, a lot of it we did know before satellites. Some of it doesn't go into climate models anyway because it's subgridscale.

The energy budget of the planet has been worked out for about 100 years. Arrhenius used it for his CO2 paper. Satellites reduced uncertainty, gave us better spatial coverage, and allowed us to measure changes in near real-time, but they didn't change the forecast.

The models had atmospheric temperature and moisture profiles right before the satellites even did.

Things like cloud height, temp and pressure are nice to know, especially for forecasting weather, but they're not terribly important for building climate models because they're smaller scale than the resolution of models.

Things like ocean chlorophyll content, salinity, pH, atmospheric composition, temperature, GCRs, TSI, ice sheet extent, vulcanism, wind, ocean circulation, hurricane occurrence, precipitation, and most of the other major contributors to climate are represented in proxies (and most of them in historical records much longer than 30 years). They are NOT inferred from models and can be determined without satellites. I can literally look at a sand sample and tell you the temperature, salinity, pH, nutrient levels, productivity, how often hurricanes have hit, how much rain there was, which way the wind was blowing, and whether there were ice sheets in the area for each year that the sample covers. There are also lake varves, speleothems, tree cores, coral cores, ice cores, and half a dozen other proxies that tell you a lot about the forcings prior to and including the past 300 years. When you point out that it has been warmer in the past or that CO2 lags temperature- that information comes from proxies. Again, if your question about how we know what we know about the past is an honest one I'd recommend "The Two-Mile Time Machine" by Richard Alley. He does a great job of explaining how proxies are used and what they can tell us.

Unrelated to climate change, but another example where the scientists got the model wrong:
ALL models are wrong. Some models are useful.

Newton's model never predicted that light could be curved by gravity or that time could dilate. He got it wrong (and so did Einstein), but Newtonian physics is still useful for designing fighter jets and sending rockets to the moon.

The climate models are wrong too, especially on short timescales and small spatial scales. They can't tell you when an El Nino will occur, what next year's temperature will be, or when there will be an especially bad hurricane season. But they can tell you trends in parameters. They've proven their utility at doing so by hindcasting the past several thousand years and predicting the past 20. The odds of them suddenly coming unhinged and getting the trajectory of the trends wrong for the near future are extremely small (statistically, less than 5%), absent some major factor that suddenly kicks in.
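As a rough illustration of what a "less than 5%" statement means statistically (hypothetical numbers, not the modelers' actual test): fit a linear trend to a noisy anomaly series and ask whether the slope is distinguishable from zero at the 5% significance level.

```python
import numpy as np

# Hypothetical 30-year temperature anomaly series: a linear warming
# trend of ~0.17 C/decade plus year-to-year "weather" noise.
rng = np.random.default_rng(1)
years = np.arange(1980, 2010, dtype=float)
anomaly = 0.017 * (years - 1980) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares slope and its standard error.
n = years.size
x = years - years.mean()
slope = float(np.sum(x * anomaly) / np.sum(x ** 2))
resid = anomaly - anomaly.mean() - slope * x
se = float(np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(x ** 2)))
t_stat = slope / se  # |t| > ~2.05 is significant at the 5% level for n = 30
print(f"trend = {slope * 10:.2f} C/decade, t = {t_stat:.1f}")
```

If the t-statistic clears the threshold, a trend of zero (or reversed sign) is unlikely to be a fluke of the noise at the 5% level.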
 
Again, I am not sure that 50-100 years of data from when anthropogenic warming is at play is sufficient to validate these models for the purposes of predicting the future of anthropogenic warming. If you are certain that 50-100 years of data is sufficient, then please enlighten us as to why you feel this way.
We're not talking about 50-100 data points vs. several thousand.

The anthropogenic era is generally accepted to have started ~1890. That's almost 120 years, all of which are covered by the instrumental record. Those readings are taken on at least a monthly basis- most much more frequently. So you're talking about close to 1,500 time periods when you consider only monthly readings. Spatially, you're talking about roughly 1,600 stations with records that long. That gives you a whole lot of data points to test the spatial and temporal accuracy of the models over the modern anthropogenic period.
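The back-of-the-envelope count above works out as follows (the 2009 endpoint is assumed from the thread's date; the 1,600-station figure is from the post):

```python
# Rough count of instrumental-era data points available for validation.
years = 2009 - 1890               # ~120 years of anthropogenic era
monthly_steps = years * 12        # "close to 1,500" monthly time periods
stations = 1600                   # stations with records that long
spatiotemporal_points = monthly_steps * stations
print(monthly_steps, spatiotemporal_points)
```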

Proxies, on the other hand, tend to have seasonal to annual resolution, so to test the same number of time periods you have to have a reconstruction about 750-1,500 years long. Spatial coverage is also far less than in the instrumental period, especially as you go farther into the past. Most of the reconstructions, especially the longer series (which are the ones where CO2 lags temp), are fairly new too, less than 20 years old. So the modelers didn't have many data points to compare hindcasts to until fairly recently. In effect you have models that were built mostly based on the observational record with a few proxies to test against. Only fairly recently has the situation reversed. Still, some of the models that predate the reconstructions are on track with reality.

The addition of more and longer proxy reconstructions hasn't changed the forecast, only made it more certain.

I'm not a statistician, so I don't know the best way to test that the models are equally good for the distant past and the instrumental record, but if I wanted to test that on my own I would probably use AIC or BIC to see if there's a significant difference in the data-model match between time periods.
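A minimal sketch of that AIC idea, assuming Gaussian residuals and entirely made-up series (a real comparison would use actual hindcasts, proxy reconstructions, and instrumental records): compute AIC separately for each era and look at the gap.

```python
import numpy as np

def aic_gaussian(observed, predicted, n_params):
    """AIC for a model with Gaussian residuals: n*ln(RSS/n) + 2k."""
    resid = np.asarray(observed) - np.asarray(predicted)
    n = resid.size
    rss = float(np.sum(resid ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical anomaly series: the model tracks the true trend, while the
# proxy-era observations are noisier than the instrumental-era ones.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 200)
obs_proxy = truth[:100] + rng.normal(0.0, 0.15, 100)
obs_instr = truth[100:] + rng.normal(0.0, 0.05, 100)

k = 3  # number of tuned parameters (assumed)
aic_proxy = aic_gaussian(obs_proxy, truth[:100], k)
aic_instr = aic_gaussian(obs_instr, truth[100:], k)
print(aic_proxy, aic_instr)  # lower AIC = better data-model match
```

A large gap between the two AIC values would suggest the model's skill is not equal across the two periods.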
 
I recall greenbean stating this:

"All volcanoes combined pump out about 200 million tonnes of CO2. Fossil fuel combustion alone is responsible for about 27 billion tonnes a year. That's several orders of magnitude more than all of the world's volcanoes produce each year.

Factoid like this one reflect ideological arguments rather than skepticism and interest in the truth. Even if it was true it would be irrelevant to the issue. Natural sinks and sources of CO2 are roughly in equilibrium so unless there is a dramatic increase or decrease in vulcanism (much bigger than a single large eruption), the amount of volcanic CO2 being cranked out has almost no effect on climate."

Now, I will agree that the response was directed at a comment about "One volcanic eruption"

Here is an article that says volcanism CAN have a HUGE impact on climate.
It also makes some interesting statements about CO2 balance, sources & sinks.

"Volcanoes Played Pivotal Role In Ancient Ice Age, Mass Extinction"

http://www.terradaily.com/reports/V...e_In_Ancient_Ice_Age_Mass_Extinction_999.html

Stu
 
It's STILL not the sun. Science works on the principle of parsimony. The best explanation is the one that fits the observations the best without assuming unknowns. "It's the sun" is not the parsimonious explanation.

The warming trend is greater at night than during the day. The stratosphere is cooling as the troposphere warms. Neither of those are consistent with warming due to changes in solar output and both are consistent with greenhouse warming.

CO2 is known to be a greenhouse gas. CO2 is measurably increasing. By known mechanisms, that increase can roughly account for most of the temperature increase observed. Blaming it on the sun does not explain why CO2 is not having the expected effect.

Beyond that, there is no known mechanism by which the small to non-existent secular trend in various solar indexes results in the observed warming.

So for the sun to be the answer you have to assume a mechanism by which the greenhouse effect of increasing CO2 is neutralized, assume a mechanism by which the sun can cause the magnitude of observed warming, and assume a mechanism by which the sun can produce the pattern of observed warming.

Ok Stu, so ol' dude has a hypothesis. Now he has to explain why his hypothesis fits the observations better than other explanations. He can start by demonstrating a long-term global temperature trend on Mars as opposed to a short-term regional trend. There is data that suggests otherwise- http://www.nature.com/nature/journal/v435/n7039/full/nature03561.html

Then it would be useful to document a concurrent trend in solar output and a mechanism by which it would cause the temperature trend. One glaring issue that stands out is that total solar irradiance has been on the declining portion of the cycle since roughly 2000- a point that Abdussamatov makes himself at the end of the article. It's hard to blame the sun for warming a planet during a period when its output has been dropping. Over the last 30 years that Earth has warmed there's been no trend in solar irradiance either, just the cycle. If Mars experienced the same 30 yr warming, it's hard to explain it by pointing to a factor that hasn't shown a trend. You could point towards GCRs again, but even if we assume that they affect cloudiness on Earth and have a significant impact on our climate, Mars has almost no clouds to speak of.

There's no conspiracy to suppress the guy. The problem is that at this point his explanation of warming on Mars isn't great. To expand it to explain warming on Earth is even worse for the reasons I mentioned before.
 
No comments on the latest round of data (or lack of) from the esteemed scientists and climatologists?

Apparently, in some cases, peer review is not necessary.....keeping original data is not necessary......hand picking 3 samples (trees) that match your conclusions is necessary, while ignoring all of the rest.....

I thought you guys would be having a field day with the "cover up" going on in England and the UN.
 
There is no cover up. If you're trying to cover something up it's generally not a good idea to publish papers about it, especially in Nature. What there is is a smear campaign. There are very clear and unequivocal explanations behind almost all of the high profile emails that were released and they only hint at malfeasance if you intentionally ignore the context.

Of course peer review is necessary, which was exactly why the scientists were discussing what to do about the journal Climate Research. At least 4 sub-par papers (at least one of which was funded by the American Petroleum Institute) were funneled into publication through a single editor who was known to be sympathetic to the "skeptical" cause. The situation came to a head in 2003 when a particularly bad paper funded by the API was published. No new research was performed in the paper. It was simply a review of previous studies done by other researchers. 13 of the researchers cited in the paper responded saying that their work was misrepresented. That one paper was so bad that half of the editorial board resigned in protest of its publication. The incoming editor-in-chief, Hans von Storch also resigned after his attempt to publish an editorial rebutting the paper was blocked. It's worth noting too that he is quite clearly not on friendly terms with Mike Mann or Phil Jones (two of the major players in the emails). Even the director of the publishing company admitted that the paper had been handled improperly. NONE of these events or the reaction of the climate science community were a secret back when the events were unfolding. Coincidentally, I was reading about the whole fiasco just a few weeks ago, before the emails came out. It wasn't a secret then either. Why is it suddenly a sinister revelation?

A recounting of the events from one of the editors who resigned (published Nov. 2003).
http://www.sgr.org.uk/climate/StormyTimes_NL28.htm

The version from the editor-in-chief (published on his website in 2003).
http://coast.gkss.de/staff/storch/CR-problem/cr.2003.htm

The publisher's comment on the issue (published in Climate Research Aug. 2003)- the author's expertise is in scientific review.
http://www.int-res.com/articles/misc/CREditorial.pdf

The response by 13 of the authors who were cited in the paper who say that their work was misrepresented (published in the journal EOS in 2003).
http://www.geo.umass.edu/faculty/bradley/mann2003a.pdf

So how was this covered up and what was improper about scientists deciding not to cite or submit to the journal anymore? Do you expect them to treat a journal with a demonstrably flawed review process as equal to other prestigious journals?

As for keeping original data- the CRU analyzes data; they don't collect original data themselves. Their raw data are COPIES of the meteorological data archived by national meteorological organizations. Even if the CRU deleted ALL of their data, none of the original raw data would be lost because it doesn't reside at the CRU. That also means they don't own the raw data they use, so they cannot give it to you no matter how many people you get to pester them with FOI requests. If you want it, you have to settle for the 98% or so of it that is publicly available or you have to go to the national meteorological service that originally collected it. So again, what is sinister about getting rid of copies of data that you have already processed when the original still exists? What is sinister about refusing to release data that you do not have the legal right to disseminate?

Your last claim I assume is something about the Yamal analyses, but I'm not sure because none was based on 3 trees. The original analysis used 241 from a total of about 2000 samples. Other analyses used various subsamples of those 2000. The Briffa paper McIntyre raised a huge stink about claimed to use 611 trees. McIntyre claims that the actual value couldn't have been more than about 250 (though he also originally claimed only 10 trees were used), but the values in the graph indicate that the actual number was almost definitely at least 200, which is plenty to get a statistically meaningful result.

There are lots of good reasons to exclude trees and they have nothing to do with confirming a pre-determined conclusion. Arbitrarily ditching most of the dataset so you can replace it with one of a different size and of unknown quality isn't one of them. First of all, once you get beyond a certain sample size, additional samples have very little effect on the result, so it doesn't make much sense to waste the time and money processing a lot more samples. Second, any trees that have been damaged by fire, wind, lightning, stripped bark, etc. may have missing or damaged rings. You also don't have a single tree that spans 2000 years. You get the series by matching up overlapping dates from multiple trees. Not all trees will have identifiable overlaps with other trees, so they can't be placed in the series and you have to toss them. Last but not least, not all trees make good proxies because their growth doesn't always correlate well with temperature. If that's the case, you can't use them, at least for the period where there is no correlation. That's the "divergence problem" with tree ring proxies.
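The sample-size point, that beyond a couple hundred trees extra samples buy very little, falls out of the 1/sqrt(n) behavior of a mean's standard error (the between-tree spread of 1.0 below is an arbitrary placeholder, not a real dendrochronology figure):

```python
import numpy as np

sigma = 1.0  # between-tree spread in the ring-width index (placeholder)
se = {n: sigma / np.sqrt(n) for n in (50, 200, 600, 2000)}
for n, err in se.items():
    print(f"n = {n:4d}: standard error of the mean chronology = {err:.3f}")
```

Going from 50 to 200 trees halves the uncertainty; going from 600 to 2,000 shaves off far less, which is why processing every last sample is mostly wasted effort.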

Which brings us to "the decline" that was "hidden." Just from the context of the email alone it's clear that it's talking about the divergence problem. First of all it says that the real temps were added to the end of each series, which should be a dead giveaway that "the decline" was not in temperature, otherwise adding in real temps would not hide it. Also, we know from the other instrumental and satellite records that temps didn't decline from 1961-1999. As it turns out, "Keith" would be Keith Briffa, the guy who published a paper in Nature a year earlier showing the divergence problem, wherein from 1961 onward, the data from SOME trees indicated a decline in temperature, while most other trees, other proxies, the instrumental record, and the satellite record indicated continued warming. Not surprisingly, most of the leaked data files concerning processing of tree rings say things within the code such as- "Specify period over which to compute the regressions (stop in 1960 to avoid the decline)." Again, if you're trying to cover something up, a paper in Nature isn't a great way to do that and you probably shouldn't comment about it repeatedly in your code.

The paper:
http://www.nature.com/nature/journal/v391/n6668/abs/391678a0.html

I just ran a search on ISI for "climate, tree, divergence" and got 79 hits over the past 10 years, most of which seem to discuss "the decline" as well, so like the rest of the "proof" of a coverup it doesn't seem to be very well hidden either.

The bottom line though is that none of the emails change any of the science on the issue and there is nothing in them to show scientific misconduct. Are some of them rude? Absolutely, but as scientists we aren't paid to be nice and we certainly aren't paid to indulge the whims of idiots. Probably most telling is the fact that even when they think they're having a private conversation they STILL think McIntyre and the like are annoying idiots. They STILL complain about the efforts to misrepresent their work. They STILL think a lot of the "skeptical" papers are crap. They aren't saying "McIntyre is too smart. He's onto our fraud" or "I can't believe they buy this hoax" or "We need to make this scarier so we can get more money."
 
i think some people won't believe global warming is fact until the east coast is flooded and humanity is doomed..probably the same individuals who believe "god" created the earth 2000 years ago. in any case i salute greenbean for taking the time to explain simple concepts to ignorant people.... keep driving your SUVs!
 
That is exactly the kind of attitude that makes me not want to listen to the global warming "freak show".

Greenbean makes good points in a coherent and emotionless manner. That is why I was hoping he would respond. He did, and thank you!

I, for one, do not believe that global warming is man made. I do believe that the earth's climate is cyclical. There is a ton of good, irrefutable evidence to back that up. I would also agree that man MAY have an impact on global warming, but is not the cause. I do believe that we should be responsible, clean up after ourselves and develop green energy. NOT HAVE IT CRAMMED DOWN OUR THROATS!!!

I am very skeptical of the "models" that the environmentalists all swear are 100% accurate. How many times in the past have these same environmentalists been wrong? In my lifetime, I was going to freeze in the next Ice Age, get sunburned because the aerosol cans were destroying the ozone (hole is still there, seasonally, and the aerosol cans are gone) and now I'm going to drown due to global warming. I was also supposed to die a couple of years ago due to the sudden increase of hurricanes. Still here. Oh yeah, the overall polar bear population is increasing, but they are all going to die?!

Previously in this thread I tried to point out that the modeling is not 100% correct and there are a ton of variables that can be manipulated that all change the end result. I referenced the hurricane models. I live in SW Florida. In the beginning, Hurricane Katrina was supposed to move across the west coast of Cuba and come knockin on my door as a Cat. 1, at best. We all know that didn't happen. Charley was never supposed to make landfall that far south and never was supposed to be a Cat 4. The track on Wilma was very accurate, but the storm was supposed to make landfall as a tropical storm or a very weak Cat 1. I'm 30 miles inland and about 60 miles from where the Cat 4 made landfall and I had Cat 2 winds. So, modeling is not infallible!!!!! FWIW, in my office, we call them "spaghetti models". When they are all plotted on the same chart, they all substantially agree the first couple of days. Yet, at 5 days or more out, they look like a plate of spaghetti. The projected paths are all over the place.

The other reason I use these models is quite simple. At the end of every season, the data can be analyzed and the models can be tweaked based on what was learned. Better, but they still miss an awful lot. Climate models that predict global warming do not have the repetitive data to make any type of comparison. Global warming has happened in the past (without our help) but the scientific data used in the models is forensic in nature. No actual temps, humidity, wind patterns, etc. All assumptions or educated guesses.
 
Oh, and I will keep driving and enjoying my SUV, my hemi powered, gas hog, pick up truck and my mustang. I sure can't wait til my other mustang is restored so I can start driving it!
 
It's like the banning of DDT. Junk science packaged as "saving" the world actually cost tens of millions of lives, many of them children.

How many millions of people will die as a result of the lost resources to fight man-made global warming?

How many ALREADY DID DIE, when ethanol raised the price of corn around the globe. Meanwhile, it massively increased the DEAD ZONE in the Gulf of Mexico, and even the state of California now says Ethanol causes more emissions than fossil fuels.

Of course, 40 years after the banning of DDT, the scientific community reversed their opinion too.

Lord Christopher Monckton speaking on global warming, associated policies, and their impact on freedom, democracy, and national sovereignty.
http://www.youtube.com/watch?v=stij8sUybx0
 
Ethanol was never about reducing emissions; it was and is all about corn subsidies, which leads to politics and therefore something we can't really discuss here. I don't think there is anyone in this thread, on either side, that thinks current ethanol production for fuel is a good thing.

As for Monckton, he's got zero scientific credentials and a tinfoil hat. He has no more credibility on this issue than any random person off the street.
 