Wednesday, December 16, 2015

Okay, here's another modest proposal


You know, if you’ve had more than a cursory look at this blog, that I am some way along the ‘sceptical’ continuum as far as Anthropogenic Global Warming goes. But one thing I have learned is that it is not enough to complain about things; one ought to make constructive suggestions. So here goes.

If we were serious about (a) proving cause and effect as far as Anthropogenic Global Warming is concerned; and (b) trying to stop it, there is one bold action I would happily get behind.

This would be grounding all the world’s aircraft for a few years. Take all the money currently being spent on various carbon trading schemes and bureaucracies and uneconomic renewable energy schemes and give it to the airlines to pay them to mothball their machines and pay their staff to sit around doing nothing. 

The rationale for this is two-fold:

1. We have a pretty good idea anthropogenic cirrus clouds from aircraft have a significant warming effect. And the warming observed over the past half-century is localised most strongly where these anthropogenic clouds are: in the northern hemisphere, not the south; and over continents, not oceans. 

2. All greenhouse gases are not equal. When I drive my car down the Princes Highway past the towering eucalypts of Royal National Park, I know that the water and carbon dioxide bands in the atmosphere at ground level are pretty nearly saturated, so the emissions of my car will not make a great deal of difference to how much additional infrared energy is absorbed. And I know also that my car’s emissions are not going to stay in the atmosphere for long, because those aforesaid towering eucalypts and other green plants are going to enthusiastically suck them up. When I fly down to Sydney, though, it worries me. The plane I’m riding is spewing carbon dioxide and water vapour out into a part of the atmosphere that doesn’t have a lot in it already, a long long way away from any plants that can use them.

I’ve made these arguments before on this blog, but not recently. So I figured it was time for some repetition.

Stopping aircraft for a few years should give a very good idea what proportion of the observed warming is due to anthropogenic cloud and emissions of greenhouse gases at altitude, and hence whether carbon dioxide emissions per se are worth stressing over.

I think the economics of this are solid. The maximum annual profit airlines have made recently seems to be of order $30 billion, and there probably aren’t more than 4 million people who would need to be paid to do nothing, as opposed to being swapped immediately to productive jobs elsewhere in the newly frisky sea and rail freight sectors. So maybe another $100 billion paid to them. That’s less than we’ve been spending in silly ways in recent years, I’m pretty sure.
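If you want to check my arithmetic, here it is as a few lines of Python – the profit figure, the headcount, and the per-head payment are the rough guesses from the paragraph above, not researched numbers.

```python
# Back-of-envelope cost of mothballing the world's airlines for a year.
# All three inputs are rough assumptions from the text, not researched figures.
airline_profit = 30e9        # ~$30 billion: recent peak annual industry profit to be compensated
staff_paid_to_idle = 4e6     # ~4 million people who can't be redeployed to sea/rail freight
annual_payment = 25_000      # assumed average annual payment per idled worker (hypothetical)

total = airline_profit + staff_paid_to_idle * annual_payment
print(f"Rough annual cost: ${total / 1e9:.0f} billion")   # ~$130 billion with these guesses
```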

Tuesday, December 15, 2015

Disintermediation


Historically, scientific journals got started as a way to share information. They were the most effective way to tell other researchers what you were doing and find out what they were doing. Nowadays, they aren’t really. They are complicated intermediaries – which add some value, true – between you and the people you want to share information with. They have rules which are largely arbitrary and which impact negatively on how useful they are as media for sharing information with people – rules about how long a paper should be, about how it should be structured, about what should be put in and what should be left out. More importantly, their primary function nowadays is not to share information, but to score points in The Academia Game™, one of the first and most dramatic successes of the ‘gamification’ craze.
Anyhow, I think the search engines we have nowadays are good enough to cope with a bit of disintermediation; so I thought I would have a go at sharing my information here instead. Some of it, anyways. I’ve got a piece of work, you see, that I can’t see scoring any points, and I want to tell you about it.

Back in my PhD I came across a paper on work done by two researchers at the University of Texas El Paso in the 1970s, Wang and Cabaness. In this paper they had investigated the copolymerisation of acrylic acid (AA) and acrylamide (AAm)  in the presence of a number of Lewis Acids of general formula XCl4, and reported that tin tetrachloride could induce the formation of a copolymer with the regular repeating formula ((AA)4(AAm)) – four acrylic acid residues in a row, followed by an acrylamide unit, rinse repeat – which they attributed to a 1:1 alternating copolymerisation of a SnCl4(AA)4 complex and acrylamide.
Structure postulated by Cabaness and Wang 1975
I was intrigued by this article, because I was also studying polymerisation with Lewis Acids (only alkyl aluminium chlorides rather than the XCl4 species), in systems where we got 1:1 alternating copolymers of donor monomers (like styrene, butadiene, vinyl acetate, or acenaphthylene) with acceptor monomers (like acrylates, methacrylates, and oh yes, acrylamides and methacrylamides). The whole basis of our understanding of these systems was that complexes of acceptor monomers with Lewis Acids made them fantastically better at being acceptor monomers, and I couldn’t see how four monomers that were all complexed to a single Lewis Acid ought to polymerise together: if they were better acceptors, they would be less likely to polymerise with each other, and once one added onto a growing polymer chain it seemed to me that it would be more likely to add a (relatively donor-ish) free acrylamide rather than one of its fellow complexed acrylic acids, which was probably held in a sterically unfavourable position as well as an electronically unfavourable condition. And 4:1 regular polymers had never been reported with acrylic acid and any more conventional donor monomers that would be more likely to behave themselves. It was all very mysterious. Nobody had ever confirmed or followed up on this work of Wang and Cabaness, which was disappointing, but not very surprising, because the whole field of playing with various Lewis Acids and donor and acceptor monomers had been a big thing over approximately the years 1968-1975 and had then petered out for no good reason.


So many years later I found a bottle of SnCl4 lying around (it’s a liquid; it comes in bottles – the Sn(IV)-Cl bond has a lot of covalent character) and remembered this paper and thought I would have a go. Wang and Cabaness had only looked at their polymers using elemental analysis, which realistically tells you about 2/10 of not very much about polymer structure, whereas I had spent quite a lot of time looking for regularity in polymer sequences using Nuclear Magnetic Resonance Spectroscopy, which tells you an awful lot, and I thought I would make the polymers they made and have a look at them with NMR. The proton NMR spectra of polymers are always broad, and the backbone protons of acrylic acid and acrylamide residues (as you can guess from their structures) end up on top of each other in a messy way. So the way to tell what is what is to do carbon-13 NMR spectra, which give you nicely resolved peaks in the carbonyl region, and reasonably well resolved peaks for the methine carbons. If you look at the carbonyl region of a copolymer of AA and AAm, you can pick out the six different nearest-neighbour environments quite nicely. In the figures below, for example, you can see clearly how base hydrolysis of PAAm generates isolated AA units on the chain, which have a protective effect on neighbouring AAm units and make it much less likely that they will be hydrolysed.
Carbonyl (left) and methine (right) regions of 13C-NMR spectra of polyacrylamide at different levels of base hydrolysis. From: Yasuda K, Okajima K, Kamide K. Study on alkaline hydrolysis of polyacrylamide by carbon-13 NMR. Polym J (Tokyo) 1988:20(12):1101-1107.
Carbonyl region of 13C-NMR spectra of AA-AAm copolymers. From: Candau F, Zekhnini Z, Heatley F. 13C NMR Study of the Sequence Distribution of Poly(acrylamide-co-sodium acrylates) Prepared in Inverse Microemulsions. Macromolecules 1986:19:1895-1902.
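If you want to see where those six environments come from, here is a minimal sketch of the standard terminal-model (Mayo-Lewis) statistics for a two-monomer chain: feed a mole fraction and a pair of reactivity ratios in, and the expected fractions of the three AA-centred and three AAm-centred triads come out. The reactivity ratios in the example are illustrative placeholders, not values for any particular conditions.

```python
# Terminal-model (first-order Markov) triad fractions for a two-monomer copolymer.
# Monomer 1 = acrylic acid (AA), monomer 2 = acrylamide (AAm).
# The reactivity ratios used below are illustrative placeholders only.

def triad_fractions(f1, r1, r2):
    """f1 = mole fraction of monomer 1 in the feed; returns centred-triad fractions."""
    x = f1 / (1.0 - f1)                      # feed ratio [M1]/[M2]
    p11 = r1 * x / (r1 * x + 1.0)            # P(chain ending in 1 adds another 1)
    p12 = 1.0 - p11
    p22 = r2 / (r2 + x)                      # P(chain ending in 2 adds another 2)
    p21 = 1.0 - p22
    return {
        # AA-centred triads (normalised among AA-centred carbonyls)
        "AA-AA-AA":   p11 * p11,
        "AA-AA-AAm":  2 * p11 * p12,
        "AAm-AA-AAm": p12 * p12,
        # AAm-centred triads (normalised among AAm-centred carbonyls)
        "AAm-AAm-AAm": p22 * p22,
        "AA-AAm-AAm":  2 * p22 * p21,
        "AA-AAm-AA":   p21 * p21,
    }

if __name__ == "__main__":
    for name, frac in triad_fractions(f1=0.8, r1=1.0, r2=0.5).items():
        print(f"{name:12s} {frac:.3f}")
```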
Now, when I got back and had a look at the paper again, I was troubled by the reaction times Wang and Cabaness reported. These sorts of radical reactions usually have an inhibition period at the beginning when nothing much happens, even if you take pains to get rid of dissolved oxygen from the system first. The reported reactions were done under nitrogen, and if you bubble nitrogen through a reaction mixture to get rid of oxygen, doing even a halfway decent job of it takes a lot longer than the 100 s or less quoted for these reactions. So you would have to bubble nitrogen through at room temperature, then shift to a higher temperature, at which point the mixture would take a lot longer than 100 s to warm up to the quoted temperature values. Maybe the times quoted were the times between the end of the inhibition period and when the reactions were quenched? That would mean Wang and Cabaness had to have been watching their reactions like hawks, and even then, quoting reaction times to a precision of one second is a bit ridiculous. So, anyhow, I resolved to cut the temperature down to 60 °C to give me a bit more time to work with, and to quote reaction times in minutes rather than seconds.
Table 3 - the tin tetrachloride data - from Wang and Cabaness 

I first tried doing what I would usually do, which is freeze/thaw degassing. I still had a significant inhibition period after I put the samples in a 60 °C oil bath, and when anything did happen, it happened too fast for me to stop it: my polymer ‘popcorned’ into a solid mass. This is something that happens with monomers that polymerise very quickly. Polymer solutions are viscous, and transfer heat worse the more viscous they get. Polymerisation reactions are exothermic. So a polymerisation that goes quickly generates a lot of heat and a viscous solution which makes it difficult for that heat to dissipate; as the temperature increases the reaction goes faster, making it even hotter and more viscous, and you get very quickly to a temperature where your solvent vaporises, and your polymer chars, and if you are doing the reaction in a 20 tonne reactor instead of a tiny little tube you ring the insurance company. That is what happened to my first attempts: they polymerised into intractable masses that I couldn’t get out of my reaction vessels without smashing them, and then wouldn’t dissolve once I’d smashed them out.
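The runaway is easy enough to caricature numerically. Below is a toy energy balance – an Arrhenius rate, a fixed heat of polymerisation, and a heat-loss coefficient that degrades as conversion (standing in for viscosity) climbs – with every parameter made up for illustration rather than measured for this system.

```python
# Toy model of thermal runaway in a batch polymerisation: heat generated by an
# exothermic, Arrhenius-accelerating reaction vs. heat lost to an oil bath,
# with heat transfer getting worse as the mixture thickens.
# All numbers are illustrative, not measured values for this system.
import numpy as np
from scipy.integrate import solve_ivp

E_A   = 80e3      # activation energy, J/mol (assumed)
A     = 5e9       # pre-exponential factor, 1/s (assumed)
dH    = -80e3     # heat of polymerisation, J/mol monomer (assumed)
M0    = 2000.0    # initial monomer concentration, mol/m^3 (assumed)
Cp    = 3.5e6     # volumetric heat capacity, J/(m^3 K) (assumed)
UA0   = 400.0     # initial heat-loss coefficient, W/(m^3 K) (assumed)
Tbath = 333.0     # 60 degree C oil bath, in K

def rhs(t, y):
    x, T = y                               # conversion and temperature (K)
    k = A * np.exp(-E_A / (8.314 * T))     # Arrhenius rate constant
    rate = k * (1.0 - x)                   # d(conversion)/dt
    UA = UA0 * (1.0 - x) ** 2              # heat loss degrades as viscosity climbs
    dTdt = (-dH * M0 * rate - UA * (T - Tbath)) / Cp
    return [rate, dTdt]

sol = solve_ivp(rhs, (0, 3600), [0.0, Tbath], max_step=1.0)
print(f"maximum temperature reached: {sol.y[1].max() - 273.15:.0f} degrees C")
```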
I decided to drop the freeze/thaw degassing and use the dodgier ‘sparge with nitrogen’ method in round bottomed flasks that would be (should be) easier to get the polymer out of.

Yes, I could get the polymer out without smashing anything. But the reactions that formed it were the same: the vessel sat there for a while without doing anything, then suddenly there was the popcorny noise of solvent vaporising and insoluble masses with yields of approximately 100%.
I cut the temperature a bit further, and cut the concentration of everything a bit, and still couldn’t get any useful polymer. 

Then my colleague Daniel Keddie suggested something that made a lot of sense, which I should have thought of a long time before: why not put in a chain transfer agent? This is something that cuts the length of the polymer chains but shouldn’t (knock wood) have any more dramatic effects on the chemistry of the reaction: it just introduces a ‘jump to a new chain’ step that replaces some of the propagation steps. So I put in enough butanethiol to reduce the degree of polymerisation to about 25 and had another go. And I got polymers I could remove from the round-bottomed flasks, and which actually dissolved up okay! These reactions proceeded to a conversion of about 40-60% when heated at 60 °C for 30 minutes – again, almost all of this time was inhibition period, so I couldn’t actually stop any of the reactions at a low enough conversion to get an unambiguous relation between the composition of the polymer and the feed composition of the monomers.
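The amount of thiol to aim for comes from the usual Mayo relation, 1/DPn = 1/DPn,0 + Cs[S]/[M]. Here is a minimal sketch of that sum, with an illustrative chain transfer constant rather than a measured one for butanethiol under these conditions.

```python
# Mayo equation: how much chain transfer agent to hit a target degree of polymerisation.
# 1/DPn = 1/DPn0 + Cs*[S]/[M].  The Cs value below is illustrative only, not a
# measured constant for butanethiol with acrylamide/acrylic acid under these conditions.
def thiol_to_monomer_ratio(dp_target, dp_no_cta, Cs):
    """Return the [S]/[M] ratio needed to bring DPn down to dp_target."""
    return (1.0 / dp_target - 1.0 / dp_no_cta) / Cs

ratio = thiol_to_monomer_ratio(dp_target=25, dp_no_cta=5000, Cs=1.0)
print(f"[butanethiol]/[monomer] ~ {ratio:.3f}")   # ~0.04 with these assumed numbers
```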
The next step of the procedure – dissolving in water and reprecipitating in methanol – was a little tricky, because the polymers were reluctant to dissolve in plain water. So I added a bit of base to convert the acrylic acid residues to sodium acrylate – and crossed my fingers and kept the temperature low to avoid hydrolysing any of the acrylamide residues to sodium acrylate as well. And I managed to get some halfway decent carbon-13 NMR spectra by running these overnight, like so:

 
Carbonyl region of 13C NMR spectra of AA:AAm:SnCl4 polymers made by lil' ole me
 
Now, the areas of these peaks should be pretty much proportional to the amount of carbon contributing, so what you can tell straight away from these results is that the products don’t have a constant 4:1 AA:AAm composition, and so the mysterious result of Wang and Cabaness is artefactual.
The spectra aren’t similar enough to be of 1:1 alternating copolymers, either – there is always excess AA, and the main AA peak always starts out as the one with one AA and one AAm neighbour. If I were getting a 1:1 alternating copolymer and then hydrolysis, I would expect to see significant amounts of AA with two AA neighbours coming in as soon as the AA with two AAm neighbours started to disappear.
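The arithmetic behind those statements is short enough to write down: the AA:AAm ratio is the sum of the AA-centred carbonyl integrals over the sum of the AAm-centred ones, and a strictly alternating chain would put essentially all of the AA intensity into the AAm-AA-AAm peak and all of the AAm intensity into the AA-AAm-AA peak. The integrals below are invented placeholders, not my measured values.

```python
# Composition and 'alternation check' from carbonyl triad integrals.
# The integral values below are invented placeholders, not measured data.
integrals = {
    # AA-centred carbonyl peaks
    "AA-AA-AA": 20.0, "AA-AA-AAm": 45.0, "AAm-AA-AAm": 15.0,
    # AAm-centred carbonyl peaks
    "AAm-AAm-AAm": 5.0, "AA-AAm-AAm": 20.0, "AA-AAm-AA": 10.0,
}

# The central unit of each triad label tells us which monomer the carbonyl belongs to.
aa_total  = sum(v for k, v in integrals.items() if k.split("-")[1] == "AA")
aam_total = sum(v for k, v in integrals.items() if k.split("-")[1] == "AAm")

print(f"AA : AAm composition ~ {aa_total / aam_total:.2f} : 1")
# For a 1:1 alternating copolymer both of these fractions should approach 1:
print(f"fraction of AA with two AAm neighbours : {integrals['AAm-AA-AAm'] / aa_total:.2f}")
print(f"fraction of AAm with two AA neighbours : {integrals['AA-AAm-AA'] / aam_total:.2f}")
```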

Besides these negative results, though, there isn’t a lot that I can say. The tin tetrachloride is doing something: it is making the reaction go a lot faster than it otherwise would. It *might* be encouraging a tendency towards alternation. Because of composition drift, though, and because of the uncertainty in the literature reactivity ratios, I can’t tell for sure whether there is any shift in the polymer composition compared to what I would see without the tin tetrachloride. It looks to me like I am getting significant amounts of base hydrolysis of any acrylamide residues which aren’t alternating – which is what you would expect from the literature.
In order to publish this, I would need to make sure my 13C NMR was quantitative, which wouldn’t be too hard. I would also need to work out a way to kill my reactions quickly, and I would need to figure out a way to reprecipitate the polymers without hydrolysing the acrylamide residues. Obviously these are soluble problems, but this system isn’t doing the really exciting thing it was reported as possibly doing, and doesn’t appear to be doing anything moderately exciting, so I don’t know if it is worth carrying on with.

 
Conclusions:

1) I have no idea how Wang and Cabaness got something out of their system that they could dissolve and reprecipitate and find viscosities of.

2) There is no evidence that the 4:1 acrylic acid:acrylamide composition they report can be reproduced.

3) People probably tried to repeat their work before, and came to the same conclusion, but didn’t put it on the internet.
 

Wednesday, July 1, 2015

One may be regarded as a misfortune; two begins to look like carelessness.

This is a post not so much about planetary science, as about the sociology of science. If you have been following recent developments in the astronomy of the solar system at all, you will know that the comet 67P-Churyumov-Gerasimenko has a funny shape. The standard working explanation for this is that it has a funny shape because it has been formed from two comets that have collided and stuck together, a ‘contact binary’, and explanations that ascribe its shape to deformation or ablation of an initially more spherical object are marginalised.

I have not understood why, but now I think I do.

First of all, I should explain that my biases are all the other way around. I would be inclined to go through the most elaborate mental gymnastics to avoid having to suppose a body was formed from two things running into each other. This is based on two things. Firstly, in common with all people who do not get their picture of space primarily from space opera, my perception of space is of someplace that is very big and very empty where things move very fast. I do not expect to be hit by a comet any time soon. If I am, I expect the relative difference in our velocities to be such that we will not end up as a ‘contact binary’, but as a mass of superheated comet and minor scientist-derived dust. Secondly, in my particular case, I am used to thinking about chemical reactions, and I know that almost all collisions of molecular bodies in the gas phase do not end up with them sticking together: without a third body to take excess kinetic energy away, they are too energetic to stick together, and fly apart again. I know very well that comets are not very much like molecules, and there is no reason for them to behave in exactly the same way, but that is the intuitive bias I have about things running into each other in a big empty space: chances are that if they are not smashed to flinders they will bounce off one another, and never come within coo-ee of each other ever again.

I have been drawn inexorably into thinking about comets by Andrew Cooper and Marco Parigi, and I interpret the shape of 67P-Churyumov-Gerasimenko as they do, in terms of the ‘Malteser’ model I have outlined below: a brittle, dense crust has developed on the comet surrounding a deformable core, and at some time in the past this crust has ruptured and the gooey interior has stretched. If this were true, unless it happened a very long time ago indeed, the neck of the comet should be less depleted in volatiles, and that is where most of the water sublimation should be coming from. This seems to be the case.
67P-Churyumov-Gerasimenko, being awesome.

Yesterday I found out, being a slow newcomer in the comet world, that 67P-Churyumov-Gerasimenko is not the only comet with a funny shape. There is 103P-Hartley 2. It has a relatively smooth neck, and this neck is also associated with direct sublimation of water, as would be expected if it had a fresher surface that had been exposed for a shorter time.

103P-Hartley2. The bright light from the rough end is coming predominantly from chunks breaking off a frangible surface and subliming, if I have understood correctly.
 
And – ahem – it was also suggested that 103P-Hartley 2 is a contact binary. To quote Lori Feaga and Mike A’Hearn of the University of Maryland*, 8th October 2011: "The heterogeneity between lobes is most likely due to compositional differences in the originally accreted material. We are speculating that this means that the two lobes of the comet formed in different places in the Solar System. They came together in a gradual collision and the central part of the dog-bone was in-filled with dust and ice from the debris."

I might be prepared to grant one contact binary, since the universe is a strange place, and all manner of things can happen. But two from such a small sample size? I don’t think so.

But as I said at the beginning, this isn’t supposed to be a post about planetary science. It finally struck me yesterday why so many people would cling to the idea that funny comets are formed by bodies sticking together.

Paper after paper refers to comets as ‘primitive’; relics of the primordial Ur-cloud out of which the solar system accreted, from which we can learn about the nature of the Ur-cloud. If comets are not primitive – if they have suffered all sorts of physical and chemical transformations as a result of repeated annealing by swinging by the sun – then they don’t tell us anything about the Ur-cloud.

I think a lot of people who study comets are very interested in the Ur-cloud. They got into comets in the first place as the best way they could approach this beast, whose nature is so important for understanding the overall history of the solar system and all those other solar systems out there. They are not necessarily interested in comets as ends in themselves. The contact binary model is the one way that the heterogeneous, oddly-shaped and oddly-behaved comets we observe can be reconciled with the picture of comets as primordial bodies. So, if you are interested in the antiquity of the solar system, what are you going to do? Accept that ‘you can’t get there from here’, or grab at this improbable lifeline that leaves the road unblocked?
 
*: In the interests of full disclosure, I was once a resident of Maryland, when my father was in the army. We moved back to Arizona when I was 2.

Tuesday, June 30, 2015

Questions Nobody is Asking, #2137 in a series

My Google skills are failing me.

I've tried a couple of times recently to find an answer to the question, 'How does Google Scholar decide what order to list your co-authors in?'

Clearly, it's not alphabetical.


It doesn't seem to be based on any of the six indices that are quoted for each academic: not total publications, or publications since year x, or h-index, or h-index since year x, or i10-index, or i10-index since year x.


It doesn't seem to depend on *when* we published together.



Or when my co-authors took ownership of their Google Scholar profile.


Or how often we have published together, or how well the publications we have *together* have been cited.


It doesn't fit with any indices divided by years since first publication, or when a co-author first published, or any simple combination of any of the above that I can think of.

The one thing I can be sure of is that it is linked to individual papers somehow.
 
And I was about to say, aha, it is something to do with the impact factor of where we first published together, since the last two co-authors in my list are ones I share only conference papers or book chapters with; but that isn't going to work either.
 
 
 
I dunno. I expect it is some complex proprietary multi-factorial algorithm. Names never seem to move once they are in the list, so maybe it would make sense if I looked at the values of some parameter or other at the time a co-author was added to the list. Or else my basic observation skills and Google skills are equally weak lately and I need a holiday to be fit for mental work. Or - maybe it is totally random, so no one has to feel bad about being everybody's last listed co-author. 


Monday, March 16, 2015

Another Brief Statement of the Obvious


There are four chief obstacles to grasping truth, which hinder every man, however learned, and scarcely allow anyone to win a clear title to knowledge: namely, submission to faulty and unworthy authority, influence of custom, popular prejudice, and concealment of our own ignorance accompanied by ostentatious display of our knowledge.

                        - Roger Bacon (1266)

Wednesday, March 11, 2015

A naive look at comets

Okay, the relentless drip-drip-drip of Marco's comet obsession has finally gotten to me. I said I was going to post something about how you could get a mass distribution in comets that was consistent with the 'brittle crust, gooey low-tensile strength centre' required for the observed morphology of 67P-Churyumov-Gerasimenko.

So here it is.

I’m going to assume we start with a homogeneous mass containing all the things that will be solid grains at a temperature of a very few K and that have been shown to be in comets. So sodium and magnesium silicates, and ammonia, and methanol, and carbon dioxide, and carbon monoxide, and water, and methane – and importantly some higher hydrocarbons, which haven’t shown up in the gas from 67P-Churyumov-Gerasimenko but seem to be attested in other systems. I guess the primordial assumption behind this is that comets formed from the outer edges of the big solar system Ur-cloud, so weren’t subjected to much in the way of fractionation processes beforehand. If this is some terrible error in cometology, please let me know.
We roll all these things into a ball using gravity™ and hurl it towards the sun.
 
As it gets warmer on the surface, the stuff that is most volatile, like methane, will sublime first, and leave behind a very fluffy residue of stuff that doesn’t want to sublime as easily.

 
Some of this fluffy stuff will flake off and float away – probably a lot – but it will also invariably compact as bits flake off and lodge with other bits, and at temperatures where the higher organics are not volatile but the small molar mass stuff is, those higher organics will end up concentrated on the surface where they can soak up all those lovely cosmic rays and start forming larger crosslinked molecules by radical-radical recombination.
Okay, if the mass of the comet is small, this evaporation process will just continue until you are left with a relatively dense mass of non-volatiles. But if the mass is bigger, this surface layer – which is gradually getting more compact and more enriched in high molar mass organics as we approach the sun – will start acting as a barrier to the progress of gas molecules from the warming interior.
And eventually, I can imagine this means there won’t be vacuum at the interior surfaces where sublimation is occurring, but appreciable pressure. There seem to be abundant studies in the literature of gas building up in comets.



Now, once the pressure and temperature increase above the triple point of any of the constituents of the comet interior, there will be the possibility for liquid to exist there.
I’ve plotted the triple points of some constituents of comets below.


(Note, if you are unfamiliar with triple points, that these are the bottom points of triangular areas where the liquid state can exist. The exact phase diagrams of things are much more difficult to find than the triple points and may not have been mapped out in detail for all of these.)
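For concreteness, here is the same information as a short table in code – approximate triple points from standard tables, so treat the exact numbers as indicative – together with a rough check of which constituents could have a liquid phase at a guessed interior temperature and pressure.

```python
# Approximate triple points of some comet constituents (standard textbook values;
# treat the exact numbers as indicative only).  Being above both the triple-point
# temperature and the triple-point pressure is a rough necessary condition for a
# liquid phase -- this is a screen, not a full phase diagram.
TRIPLE_POINTS = {            # (temperature in K, pressure in Pa)
    "water":           (273.16, 6.1e2),
    "carbon dioxide":  (216.6,  5.2e5),
    "ammonia":         (195.4,  6.1e3),
    "methanol":        (175.6,  2e-1),
    "methane":         (90.7,   1.2e4),
    "carbon monoxide": (68.1,   1.5e4),
}

def possible_liquids(T, P):
    """Constituents that could have a liquid phase at interior temperature T (K) and pressure P (Pa)."""
    return [name for name, (Tt, Pt) in TRIPLE_POINTS.items() if T > Tt and P > Pt]

# A guessed 'warm, pressurised interior': 220 K and 10 kPa (pure assumption).
print(possible_liquids(220.0, 1.0e4))   # -> ['ammonia', 'methanol'] with these numbers
```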
The species that are going to be liquid at relatively low temperatures and pressures are relatively minor constituents of comets, of the order of a few percent in total (probably), so we won’t get a gooey liquid core, but a ‘moist’ core with liquid on the surfaces of solid grains, like a fairly dry soil. This will happen at temperatures and pressures far below what we would need to liquefy water, and these liquids will persist in the core as more volatile components turn into gas and bubble out, while the shape of the comet is maintained by the outer layer of non-volatiles gummed together by organic polymer.


Now, when the comet flies out into the cold again, the interior part of this liquid core is likely to freeze before it evaporates, leaving the core of the comet no longer a loose aggregation of stuff, but particles of relatively non-volatile components sintered together with a layer of frozen hydrocarbons. This will still have a low tensile strength, but I would think this sintering would significantly improve its compressive strength. And the different levels of degassing possible from a fairly rigid external shell would fit the great variability in comet densities observed.  There may also be interesting fractionation of one component freezing before another, depending on how good the outer layer is at retarding heat and mass transfer from the interior. How hot can it get inside? How high can the pressure get? I guess I should look up people's best guesses for those questions. I can imagine the shell getting stronger and stronger on repeated passes of the sun if it doesn't break, sustaining higher pressures inside, and eventually allowing the same sintered morphology to be reproduced with water ice holding inorganic particles together.




So, in summary, comets should end up kind of like Maltesers™.

Saturday, February 14, 2015

Define. Clarify. Repeat.



Being A New Post for Marco to Hook Comments On

Like most people with an interest in the historical sciences, I count Alfred Wegener as one of my heroes. He marshalled an overwhelmingly convincing mass of evidence of the need for continental drift, argued his case coherently and courageously against monolithic opposition, and was eventually vindicated long after he disappeared on a macho scientific expedition across the Greenland ice cap.

Like most people who have thought about it seriously, I also count the pillars of the scientific establishment who mocked Alfred Wegener among my heroes. Because no matter how much evidence there is of the need for a new theory, you can’t throw out the old theory until you have a new theory. And for a new theory to be science, it needs to have a plausible mechanism. And in the case of continental drift, there was no proposal for how it could possibly have happened that was not obviously wrong until the mid-ocean ridges were discovered, long after Wegener’s death.

To generalise: if you are an iconoclast who wants to convince me to change my mind about something scientific, you need to do two things. (1) Present an overwhelming mass of evidence that the existing models are inadequate: there has to be something that needs to be explained. (2) Present some vaguely plausible model, consistent with the other things we know, that explains this stuff that needs to be explained.

 The rest of this post is just going to be me arguing with Marco, so I’ll see the rest of you later. :)

***
From the most recent comment of Marco down on the ‘Yes,  Natural Selection on Random Mutations is Sufficient’ post:

I'm just going to summarise what I believe to be the source of our differences.

It does not make sense to talk about the source of our differences. You have not yet clarified your model sufficiently for it to be meaningful to talk about the differences between us. As the iconoclast, you need to present overwhelming evidence that the existing model needs to be changed, and some plausible mechanism for an alternative model. These are both necessary. Reiterating that you see the need for a change, and advancing very vague mechanisms that are not linked to the known facts of molecular biology, are never going to convince me. Of course there is no need for you to convince me; but if you want to convince anyone in the scientific community, these are the two things that you need to do. 

1) Experimental basis - your mentioning that a synthesis cannot be experimentally derived by definition, is to me an admission that it is not strictly science. You do believe it to be science by a reasoning I do not accept.

The only way I can construe this statement is that the historical sciences in toto are not science. To me, this is an unreasonable limitation of the meaning of ‘science’. It impacts all possible mechanisms of evolution.

2) expanding informatics principles to genetics - genes as a store of information and instructions analogous to a computer algorithm. To me it is obvious and valid - to you (and biologists in general) it is a big no-no.

This is because biologists know more than you do. The relationship of genes to computer algorithms is only an analogy, and it is a very weak analogy.

3) definition of randomness - Ditching a statistically valid definition of random in favour of a statutory functional one makes the synthesis *not falsifiable* in the statistical sense. One should be able to verify that a simulation based on statistical randomness would come to the same probability distribution.
4) Dismissal of trivial non-randomness. You appear to do this equally for biochemical reasons that mutations would happen in certain areas more than others that are not proportional in any way to the function, but also it appears for things like horizontal gene transfer and epigenetics effects. To me it is an admission that random is incorrect even in your functional description. For instance, I think it is as reasonable a synthesis that horizontal gene transfer explains the spread of all beneficial mutations. I do not think that it is the whole story, but the standard evolutionary synthesis crowds out other ideas as if it had been experimentally verified.

I am absolutely sure that a simulation based on statistical randomness could show natural selection on random variations was sufficient to account for biological change. I am absolutely sure that a simulation based on trivial non-randomness could show natural selection on what I call trivially non-random variations was sufficient to account for biological change. Alternatively, I am sure that a simulation based on statistical randomness could show natural selection on random variations was not sufficient to account for biological change. None of these simulations would necessarily tell us anything about reality. The real system that we are trying to simulate is too complicated. Modelling is not experiment. All models are only as good as their assumptions. This quibbling about definitions of randomness is, to me, irrelevant and uninteresting. It does not get us one step closer towards identifying a deficiency in the standard model, nor towards clarifying a plausible mechanism for directed evolution.
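To be concrete about the sort of toy I mean: a few dozen lines in which purely random bit-flips plus selection reliably climb towards a target. It ‘shows’ that selection on random variation is sufficient – within a model whose assumptions were chosen to make that come out, which is exactly why it tells us nothing much about reality.

```python
# A deliberately simple toy: random bit-flip mutations plus selection climbing
# towards a target genome.  It demonstrates nothing about real biology -- the
# assumptions (fixed target, independent sites, constant selection) were chosen
# to make selection on random variation look sufficient.
import random

random.seed(1)
GENOME_LEN, POP, GENS, MUT_RATE = 100, 200, 300, 0.01
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(g):
    return sum(1 for a, b in zip(g, target) if a == b)

def mutate(g):
    return [(1 - bit) if random.random() < MUT_RATE else bit for bit in g]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 5]                      # truncation selection: top 20% breed
    pop = [mutate(random.choice(parents)) for _ in range(POP)]

print("best fitness after", GENS, "generations:", max(fitness(g) for g in pop), "/", GENOME_LEN)
```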

Monday, September 15, 2014

The Irish Elk is not a good role model for a Higher Education sector

“Professor Young said international rankings that show no Australian university in the top 100 for research citations demonstrate the need for dramatic higher education reform.”


No, international rankings in research citations demonstrate nothing.


High research citations are the result of working in important areas that lots of other people are working on. 

The most highly cited results in these important areas that lots of people are working on will come from the scientists with the best toys.

We cannot afford the best toys.


I just did a bit of exploring on the internet, and in 2012 the Australian Commonwealth government spent about $850 million on the Australian Research Council, which is the main funding source of curiosity-driven research in this country. In the same year, Samsung spent $42 billion on research and development. There is no way you can expect Australian researchers who are interested in making conducting polymers for flat screen displays, for instance, to compete with that sort of funding. There is no point in the Australian Commonwealth funding work that is trying to compete with the work Samsung is doing, or with the funding priorities of the Max Planck Institute, or the strategic goals of the California State University system. There are lots more foreigners than us and they have a lot more money. They can afford better toys.

We should spend our limited resources on research that is important to Australia, but less so to the rest of the world – on things that are of value to the citizens of the Commonwealth of Australia, but that the big fish in the pond can’t be bothered with. There is no reason to suppose that the results of this work will be in topics that are flavour-of-the-decade overseas. If the funding mechanisms of public universities are doing their work properly, they won’t be.

An Australian university making the top 100 list would truly be a great tragedy. It would mean that Australian problems are being ignored and that universities working on problems relevant to our country are being starved of funding. If it ever happens, I will burn Professor Young in effigy and drown my sorrows in a whole lot of soju.

Wednesday, September 10, 2014

I know what you want to see: some half-arsed Climate Modelling!

The graph I showed in the last post wasn’t very good evidence for anthropogenic global warming. If I wanted to scare you, I would show you this graph instead.

This shows the correlation between carbon dioxide and temperature found in a brilliant set of data collected from ice cores at Vostok, Antarctica, where ‘0’ on the temperature axis is the average temperature for the last century or so. [Attribution to text files of raw data: J. R. Petit, J.M. Barnola, D. Raynaud, C. Lorius, Laboratoire de Glaciologie et de Geophysique de l'Environnement 38402 Saint Martin d'Heres Cedex, France; N. I. Barkov, Arctic and Antarctic Research Institute, Beringa Street 38, St. Petersburg 199226, Russia; J. Jouzel , G. Delaygue, Laboratoire des Sciences du Climat et de l'Environment (UMR CEA/CNRS 1572) 91191 Gif-sur-Yvette Cedex, France; V. M. Kotlyakov, Institute of Geography, Staromonetny, per 29, Moscow 109017, Russia]

I came to this data because I wanted to have a closer look at an assertion I have come across a number of times: that changes in carbon dioxide lag changes in temperature in ice core measurements. And yes, they do seem to, but it is a very unwise thing to base a full-blooded scepticism about global warming on, because the lag is smaller than the uncertainty in the data. The age of the ice and the age of the air trapped in the ice are not the same: there is a difference of about 3000 years between the age of the trapped air and the age of the ice, which isn’t known with absolute accuracy, because it takes time for the snow and ice above a little bubble of air to become compact and impermeable enough to trap it there for good. The carbon dioxide content is obviously calculated from the air, while the temperature is calculated from the isotopic ratio of deuterium to hydrogen in the ice molecules. And the imprecision in aligning the exact times of these two sets of data is larger than the lag values that have been reported. It would be nice if this data gave a definitive answer as to how closely carbon dioxide and temperature changes track one another, but all we can say is that on a time scale of +/- 1000 years or so they move simultaneously. I could just let you draw a line through the data extrapolating to the 400 ppm of carbon dioxide we have today, but I will do it myself.

This is a fit to the data assuming that all the change in temperature is due to radiative forcing by carbon dioxide, fixing T = 0 as 286 K and 284 ppm CO2, with the log of the concentration change giving a change in absorption which has to be compensated by increasing the temperature of a black body radiator, with one adjustable parameter (an invariant non-CO2 radiative forcing) adjusted to minimise the sum of the square of the differences between the fitted curve and the experimental data.

Scary, eh?

If this is the correct way to extrapolate the data, then we are about 6 degrees cooler than we should be, and are just in some sort of lag period - of some unknown length, but definitely less than a thousand years - waiting for this to happen.
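For anyone who wants the shape of that calculation, here is a minimal sketch of the kind of fit described above – not the exact script, but the same ingredients: the standard ~5.35·ln(C/C0) W/m² expression for CO2 forcing, a black-body energy balance, and a single constant non-CO2 forcing fitted by least squares. The CO2 and temperature arrays are stand-ins; the real Vostok series would go in their place.

```python
# Sketch of the fit described above: all observed temperature change attributed to
# CO2 radiative forcing plus one adjustable constant (an invariant non-CO2 forcing).
# The data arrays below are stand-ins -- the real Vostok CO2/temperature series
# would be loaded in their place.
import numpy as np
from scipy.optimize import minimize_scalar

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T0, C0 = 286.0, 284.0     # reference temperature (K) and CO2 concentration (ppm)

co2_ppm = np.array([190.0, 220.0, 250.0, 280.0, 300.0])      # placeholder data
delta_T = np.array([-8.0,  -5.0,  -2.5,   0.0,   1.5])       # placeholder data

def predicted_dT(co2, f_other):
    """Temperature rise needed to re-radiate the CO2 forcing (5.35*ln(C/C0)) plus a constant."""
    forcing = 5.35 * np.log(co2 / C0) + f_other
    return (T0**4 + forcing / SIGMA) ** 0.25 - T0

def sum_sq(f_other):
    return np.sum((predicted_dT(co2_ppm, f_other) - delta_T) ** 2)

best = minimize_scalar(sum_sq)
print(f"fitted non-CO2 forcing: {best.x:.2f} W/m^2")
print(f"extrapolated change at 400 ppm: {predicted_dT(400.0, best.x):.1f} K relative to the reference")
```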

I was on the brink of converting myself to global warming alarmism, but I thought I should have a look at the original papers first. Here are some great graphs from Petit et al., 'Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica', Nature 399: 429-436 (1999).


Carbon dioxide is not the only thing that is correlated with temperature changes. Methane, another greenhouse gas, is correlated with temperature changes. (They did the maths in the paper, and r2 is 0.71 for CO2 and 0.73 for CH4). The temperature changes are also closely correlated with the predicted insolation – the amount of sunlight incident on the Earth, varying according to irregularities in its orbit. Dust and sodium (a proxy for aerosols, which we know are cooling) are negatively correlated with temperature changes (r2 is 0.70 for sodium). Ice volume (which is a proxy for water vapour, another powerful greenhouse gas) is positively correlated with temperature.

While insolation can only be a cause of warming, all of these other correlating things can be both a cause and an effect of increasing global temperature. We do not know, just by looking at this data, what is what. A sudden fall in dust and sodium, an increase in ice volume, and a sudden rise in CO2 and CH4 characterise the onset of all of the interglacial warm periods covered in this data. In the graph below I’ve fit the data again, but this time, instead of adding an invariant ‘radiative forcing by other things’ term, I have multiplied the carbon dioxide radiative forcing term by an adjustable constant to approximate the effect of all the other variables that are changing in synch with carbon dioxide. This constant turned out to be ‘41’ for the best fit, shown. So using this very crude fit, I can extrapolate the effects of *just* changing carbon dioxide concentration to 400 ppm, without any of those other things changing. (That's the line of green triangles hugging the axis from 300 to 410 ppm.) This result seems absurdly Pollyanna-ish, even to me, and I'm sure I could make it look scarier with a model for the experimental data with more adjustable parameters: but that's what the 'suck it and see' model gives me.
I've also put in the observed changes in modern times on this graph. It does make sense to attribute these to CO2 with a little help from the other greenhouse gases we've been putting into the atmosphere. And because we're extrapolating beyond the bounds of the historical data, we may be in a strange and uncharted perturbation of the global climate system. So maybe there is still a significant lag for us to catch up with. Maybe.

But there is one other thing that emerges from this ice core data that suggests very strongly that carbon dioxide concentrations are much more an effect than a cause of global warming. Have another look at this figure:

At the end of each interglacial period, the temperature drops before the carbon dioxide concentration does. This is not a minor effect lost in the uncertainty, like the possible lag in carbon dioxide concentration at the beginning of warming periods; it is a big lag of many thousands of years. Insolation and methane don't behave like that: they rise and fall in lockstep with temperature. What this tells me is that carbon dioxide has historically not been sufficient, by itself, to maintain a warming trend. So we can completely discount any panic-mongering positive feedbacks.

Thursday, September 4, 2014

Slicing Idea Space



The other day I became aware of a publication attacking the group Friends of Science in Medicine – of which friends I am one – published in the minor journal ‘Creative Approaches to Research’, taking Friends of Science in Medicine to task for unfairly attacking complementary medicine. As the publication originated in my own institution, I felt an obligation to respond. So I wrote a vituperative response.

However, the process of writing the response has crystallised something in my mind which I think is valuable. This something is the distinction between evidence-based and science-based policy. The main coherent argument of the published attack on Friends of Science in Medicine is that complementary medicine is evidence-based, so it is unfair to deny it a place with other evidence-based treatments at the public trough. And this argument is valid. Complementary medicine is evidence-based. There is a vast literature of studies giving positive evidence for complementary medicine. It is published in professional-looking journals. It is peer-reviewed, for what that is worth. The evidence is apparently there that all manner of weird treatments work. But if you look more carefully at this body of literature, it looks much less impressive. The design of the studies is flawed. Controls are missing. Data is cherry-picked to support a preconceived conclusion. Alternative hypotheses that could explain the results observed are not considered. The overwhelming bulk of the great mass of evidence is just not very good evidence. 

And the same is true for many other things that are not complementary medicine: any observation that can be selected from the overwhelming deluge of data that eternally gushes out at us is evidence. 

This picture is evidence for the Loch Ness monster. 

This graph is evidence for anthropogenic global warming.

What we need to do is test evidence. The process of testing evidence – a process that has been proven to work – is called "science". You might remember my amendment of a Richard Dawkins quote so that it made sense:


We see some evidence, and create a model that explains the evidence: a hypothesis that the evidence means that, if we do action X, we will see result Y. We do action X, and see if we see result Y. We don’t do this just once. We think about what the implications of our model are, and what new things it predicts: if it predicts we will see result Y1 if we do never-before-attempted action X1, we try action X1, and see if our model has correctly predicted this new outcome. As described, this procedure seems flimsy, because obviously an infinite number of explanations are possible for anything we observe. So we apply one simple additional requirement: our model has to be consistent with all the other models that have been tested a lot, and not shown to fail yet. Our overall model of the universe has to be consistent. This process of testing the evidence is science, and policy that is based on models describing the evidence that have passed all of these tests is science-based policy.


Clearly, evidence-based policy is better than policy based on the things on the edges of the figure below. And clearly evidence-based policy will shade off into a penumbra of flimsier and flimsier evidence. If we don’t have science to base our decisions on, we should base them on whatever evidence is available. But if we have a science-based understanding of a phenomenon, we should preferably base our actions on the science, not simply the evidence per se. And if we are spending other people’s money, we should spend it in the most effective way we can. Which means a science-based way where we can, instead of an evidence-based way. Is that clear?

To recapitulate:

Science-based policy is based on a model that explains the evidence;

It is based on a model that is testable;

It is based on a model that has been tested, and not found to fail;

It is based on a model that does not contradict any of the models for different phenomena that have been tested, and not found to fail.

When we are distributing resources, we should wherever we can distribute them in a science-based way. 

I reckon this distinction between science-based and evidence-based anything is a distinction that is underutilised and valuable, and we should make it, loudly, whenever it is relevant.