I was very sorry to learn on the weekend from my Mum that Prof Bob Carter had died. I had not known before that he had actually been hired by James Cook University to fill my grandfather's position when he retired. I cannot remember ever meeting Prof Carter, but I did try to contact him once in 2007 – despite doing my best to grab his attention, see below, he must have been too overwhelmed with unsolicited email to notice me. 73 once seemed an impossibly distant age; but from here at 45 I am alarmed to realise that it is just as close as 17, which doesn't seem long ago at all. I haven't been back to Nong Khai or Pearl Harbour or Bellagio or Fargo or Bochum in the past 28 years. I probably never will. And actually talking with Prof Carter is now something that I certainly never will do. So swiftly life passes.
Carpe diem, gentle reader, carpe diem.
Greetings,
I am Bill Lacy’s grandson, PhD graduate of JCU, and a lecturer in physical
chemistry at UNE. You must be deluged with mail all the time, so I
thought I would put all those biographical bits in the first sentence to try
and grab your attention!
I think the reason there is a ‘scientific consensus’ about anthropogenic global
warming is because the mechanism is so good, even though the y = mx+c plot of
temperature vs. [CO2] looks so spotty (http://chrisfellows.blogspot.com/2007/03/i-want-shoehorn-kind-with-teeth.html).
This makes it the opposite of continental drift, where the evidence was
so good for hundreds of years but nobody could think of a mechanism so they
quite rightly dismissed the idea. Of course, the mechanism that is so
good also implies that AGW cannot possibly be the existential threat it is made
out to be.
What I mainly want to say is that ‘AGW does not exist’ is a position that has
uncertain supply lines and is too exposed to enemy fire. Veterans like
yourself are needed to fall back and defend the stout battlements of
‘attempting to stop AGW is doomed to failure and a witless failure to
prioritise’, with Bjorn et al. (Including me, in a tiny way – see Chemistry in
Australia, Jan/Feb 2007).
Best regards,
Chris
Edit 11th February: I should say that I have reconsidered the position that I proposed to Prof Carter in the email above. While I still think that 'AGW does not exist' is not a proposition that has any chance of victory in the foreseeable future, I think it is important that people are prepared to voice it articulately and intelligently. This is because the 'Overton Window' – the volume of acceptable ideas in idea space – is defined not only by what it contains, but by what is outside it. Its limits are not fixed, and if all the ideas being expressed articulately and intelligently are within it, it will tend to contract. Keeping it open as wide as possible is good for our species.
Edit 15th June: I found out quite by accident the other day that my parents sat next to Ian Plimer at Bob Carter's funeral, which pleases me in a six-degrees-of-separation way.
Tuesday, January 19, 2016
The Science Delusion
I have some sympathy for people who believe dumb things just
because everyone else does, but I find it very tedious arguing with them on the
internet. I am equally sympathetic toward people who believe sensible things
just because everyone else does, but I am always a little nervous because I
have seen in my own life just how quickly and completely they can turn into the
first kind of people. It is awkward when
they join in to help me argue on the internet and I would rather they didn’t.
Ideally I would like people to believe sensible things for good reasons; but a
close second would be people who are willing to stand up, unsupported by a
community of like-minded believers, and believe dumb things for bizarre
idiosyncratic reasons.
Which brings me to Rupert Sheldrake and ‘The Science
Delusion’. I haven’t read the book; I
just watched the TED Talk (Banned! ZOMG!).
Sheldrake starts by defining ‘The Science Delusion’ as ‘the belief that
science already understands the nature of reality in principle, leaving only
the details to be filled in.’ This is indeed a delusion; and a dangerous one.
Making overblown claims for science, like making overblown claims for
dialectical materialism or the healing power of turtle gall bladder extract, is
going to make science look stupid when it turns out that reality is different
from what we said it was. This will lead people to distrust science and more
readily embrace the alternative, which is usually to believe all sorts of dumb
things just because everyone else does. I was heartened by this first statement
of Sheldrake’s and would have been very happy if he had gone on to explain what
science is and how it works; and how science does not, and cannot, understand
the nature of reality in principle. This would be a very valuable thing to do
in a TED talk and might just be possible in 18 minutes.
But instead, ah, Sheldrake first accuses science of being a
wholly owned subsidiary of the materialist worldview, which has never been
true, but was much more true in the late 19th century than now; then
lists ten ‘dogmas’ of science which he thinks would be better phrased as
questions; and then explores - sort of - a few of these in detail by expounding
his own bizarre idiosyncratic theory of ‘morphic resonance’ and talking about
evidence for variations in fundamental physical constants.
The ten dogmas Sheldrake quotes are:
1. Everything is like a machine.
2. Matter is unconscious.
3. The laws of Nature are fixed.
4. The total amount of energy and matter in the universe is constant.
5. Nature is purposeless.
6. All heredity is material.
7. Memories have a material existence in the brain.
8. The mind is inside the head.
9. Psychic woo is impossible.
10. Mechanistic medicine is the only kind that really works.
I’ll take each of these in turn, rephrase it as a question,
and see if I can find anything valuable in doing so.
1. Is everything – the universe, animals, plants,
us – like a machine? Well yes, and no.
This depends on what we mean by ‘machine’ and what we mean by ‘like’. The human
mind operates on metaphor and analogy and imperfect simplified models for
complicated things, so it is quite right to say that all of these things are
‘like a machine’ in the same way as it is right to say that the planet Earth is
‘like a grapefruit’ or an electron is ‘like the planet Earth orbiting the sun’;
they are imperfect analogies that fit some properties of the object or
phenomenon, and it is important to remember that this is to some extent true of
every model we use, no matter how complicated and beautiful. Every model will
be shot through with metaphors and analogies that carry emotional baggage for
good or evil, and every model will fail to correspond adequately to reality
under some conditions. These conditions may be near and common, and we might stick with our
model simply because it is the best rule of thumb we have; or they may be far
and few, and we may be misled into thinking that our model is the true picture
of reality. So yes, all these things are like machines, in that, like machines,
there are conditions where simple inputs give simple outputs and we can
follow all the intermediate steps of cause and effect. We press the green
button, it engages lever C, which turns cog B, and some flap opens and we get a
can of Coke. Using the analogy of a machine for all sorts of things can be
incredibly useful. Of course, we know that living systems often rely on
complex systems with lots of inputs that surf the interface between chaos and
order (to hand wave around a lot of things that have a solid mathematical
basis) where we would be mad to design machines that work that way. And we know
that as far as we can tell, when we drill down towards more fundamental
building blocks of all the things we see, we get to the quantum world, where
particles behave less like machines than anything we can imagine.
All in all, thinking of things in the universe as being like machines is usually going to be a
fruitful way to think about them; but we must remember that this is an analogy,
while at the same time not falling into the hippy-dippy trap of imparting
quantum weirdness to the macroscale.
2. Is matter unconscious? I would answer not ‘yes’ or ‘no’ or ‘yes and
no’ but ‘meh’. My take on the
(un)importance of consciousness is here. I think it is quite clear that we are
part of a continuum of entities, and that a dog, for instance, has an inner
life the same as we do; and that a moth probably does, and that in some sense
so does any system that is taking in impressions from the outside and reacting to
them. It does not seem to me that the
fact that we are conscious is of any great importance for our understanding of
the universe. I feel quite strongly that setting up consciousness as something
distinct from the material world would be a retrograde step that would impede,
rather than improve, our understanding of reality; I am one of those materialists
who believe it is exhilarating and wonderful that everything is made of matter,
this fantastic and mysterious stuff.
As far as sentient galaxies and conscious
electrons go – well, matter is strange. I have no evidence one way or another. Why
not? If these hypotheses can explain things otherwise unexplained, I see no reason
not to entertain them.
3. Are the laws of nature fixed? This is a question
that is amenable to observation. Looking at rocks laid down billions of years
ago, and out into space at places billions of light years away, shows things
that are consistent with the laws of nature being the same, or very, very, very
similar, everywhere and everywhen that is observable, until we get to these messy
‘singularities’. So in practical terms, the laws of nature are fixed, with the
exception of things that physicists are well aware of and worry about a lot.
This of course has nothing to do with the nature of reality in principle. The
laws of nature may not be fixed of necessity over the extent of time and space
over which we see them fixed, any more than the endless streams of traffic we
might watch from an overpass are necessarily going to stay in their right
lanes. Maybe it is just electrons obediently following Schrödinger’s equation
the way commuters are obediently following the road rules. We don’t know. But
for any practical purpose, sure. The laws of nature are fixed.
4. Is the total amount of energy and matter in the
universe constant? This is really a subset of 3. This is undoubtedly one of
those key laws of nature that are fixed for all practical purposes. So far,
every time we have thought this law wasn’t true, we have found another source
of energy to balance it out. But ‘continuous creation’ was a viable
cosmological hypothesis within living memory. And we know particles can ‘wink
into existence’ in the vacuum. But again,
like 3, *for any practical purpose* the answer is ‘yes’. I would be very, very,
very wary of postulating ‘maybe matter and energy aren’t conserved here’ as a
hypothesis to explain any macroscopic phenomenon. Experience has taught us that such a
hypothesis is very, very, very likely to be wrong.
5. Is Nature purposeless? The only answer to this is ‘we don’t know’. We
cannot know without information about whatever larger reality our universe is
embedded in, information which we cannot obtain by any conceivable observation
made within our universe. It may be as futile and purposeless on the inside as a Quentin
Tarantino movie, and yet have been crafted with the same meticulous care for some purpose that makes sense on the outside, just as a Quentin Tarantino movie has the purpose of making oodles of money.
6. Is all heredity material? Trivially, the answer
is ‘no’, because culture is non-material heredity, and lots of social ‘higher’ vertebrates
have culture. New cultural behaviours have been observed being
developed and inherited in baboons, whales, probably magpies, and of course, us.
Less trivially, all the inherited features of organisms can be explained
perfectly well by material causes – recombination of genes, horizontal gene
transfer, maternal effect genes, environmental effects on gametes and
developing young, etc. There is *no need* to postulate an alternative
explanation based on some kind of mysterious woo that has no plausible
mechanism, to explain phenomena that are already well-explained in terms of
well-understood mechanisms. This
mysterious woo in Sheldrake’s case is what he calls ‘morphic resonance’ and his
explanation of the evidence for it is as unconvincing as twenty-seven
unconvincing things found in a viral list of ‘The Twenty-Seven Least Convincing
Arguments of 2015’.
7. Are memories stored as material traces within
the brain? Well, you can erase memories by destroying bits of brain. Not feet, or eyeballs, or
sections of colon, or paperclips in a desk drawer, or trees in Africa, or
mountains on the dark side of the moon. This is good prima facie evidence that
memories are indeed localised in the brain, and again there is *no need* to
postulate an alternative mechanism based on mysterious woo.
8. Is the mind inside the head? Well, yes, see #7.
9. Is telepathy, etc., illusory? Probably. There is
no evidence for it that is screaming out to be explained. Like this paper about gender bias in physics teachers in the German-speaking world, it is just humans seeing
patterns in noise. IMHO. If evidence for telepathy or whatever emerges that screams out to be explained,
my patient conservative biases would lead me not to discount it out of hand,
but to seek explanation for it in terms of the physical science we already
know, rather than embracing the mother of all paradigm shifts into a brave new
world of woo. And I am confident that 'in terms of the physical science we already know' is where the explanation would be found.
10. Is mechanistic medicine the only kind that
really works? Here again it depends on our definitions. There is absolutely no
compelling evidence that any treatment for anything is effective unless it affects
the physicochemical status of the human body. Do we understand the mechanism of
every treatment that is effective? We don’t. Modern medicine is still (scarily,
to me) very much a matter of ‘take this cocktail of drugs that have been
statistically shown to be effective’. Does
this mean treatments whose mechanisms are unknown will turn out to depend on
crazy new principles unrelated to existing science? Experience suggests the
answer to this is ‘no.’ Mechanistic medicine is the only kind that works, even
if we don’t understand the mechanisms yet.
So expressed with a bit of humility and nuance, there is
something in most of Sheldrake’s questions. He probably is a crazy ideologue
pushing a daft agenda (like Richard Dawkins); but a lot of what he is actually
saying is worth thinking about (again, like Richard Dawkins).
Wednesday, December 16, 2015
Okay, here's another modest proposal
You know if you’ve had more than a cursory look at this blog
that I am some way along the ‘sceptical’ continuum as far as Anthropogenic
Global Warming goes. But one thing I have learned is that it is not enough to complain about things; one ought to make constructive suggestions. So here goes.
If we were serious about (a) proving cause and effect as far as Anthropogenic Global Warming is concerned;
and (b) trying to stop it, there is one bold action I would happily get behind.
This would be grounding all the world’s aircraft for a few
years. Take all the money currently being spent on various carbon trading
schemes and bureaucracies and uneconomic renewable energy schemes and give it
to the airlines to pay them to mothball their machines and pay their staff to
sit around doing nothing.
The rationale for this is two-fold:
1. We have a pretty good idea that anthropogenic cirrus clouds
from aircraft have a significant warming effect. And the warming observed over
the past half-century is localised most strongly where these anthropogenic
clouds are: in the northern hemisphere, not the south; and over continents, not
oceans.
2. All greenhouse gases are not equal. When I drive my car
down the Princes Highway past the towering eucalypts of Royal National Park, I know
that the water and carbon dioxide bands in the atmosphere at ground level are
pretty nearly saturated, so the emissions of my car will not make a great deal
of difference to how much additional infrared energy is absorbed. And I know
also that my car’s emissions are not going to stay in the atmosphere for long,
because those aforesaid towering eucalypts and other green plants are going to
enthusiastically suck them up. When I fly down to Sydney, though, it worries
me. The plane I’m riding is spewing carbon dioxide and water vapour out into
a part of the atmosphere that doesn’t have a lot in it already, a long long way away from any
plants that can use them.
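To put a toy number on the saturation point: here is a minimal Beer-Lambert sketch. Every number in it is invented, and it is nothing like a real radiative transfer calculation (which has to worry about line wings, pressure broadening, and altitude); it just shows how each extra unit of absorber buys less and less absorption in an already-crowded band.

```python
# Toy Beer-Lambert illustration of band saturation. All numbers are
# invented; this is not a model of the real atmosphere.
import math

def fraction_absorbed(c, k=1.0):
    """Fraction of band radiation absorbed over a fixed path at
    absorber concentration c, assuming simple Beer-Lambert decay."""
    return 1.0 - math.exp(-k * c)

for c in (1, 2, 4, 8):
    gain = fraction_absorbed(c + 1) - fraction_absorbed(c)
    print(f"c = {c}: absorbed {fraction_absorbed(c):.3f}, "
          f"marginal gain of one more unit {gain:.4f}")
```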
I’ve made these arguments before on this blog, but not
recently. So I figured it was time for some repetition.
Stopping aircraft for a few years should give a very good
idea what proportion of the observed warming is due to anthropogenic cloud and
emissions of greenhouse gases at altitude, and hence whether carbon dioxide
emissions per se are worth stressing over.
I think the economics of this are solid. The maximum annual
profit airlines have made recently seems to be of order $30 billion,
and there probably aren’t more than 4 million people who would need to be paid to do nothing, as opposed to being swapped immediately
to productive jobs elsewhere in the newly frisky sea and rail freight sectors.
So maybe another $100 billion paid to them. That’s less than we’ve been
spending in silly ways in recent years, I’m pretty sure.
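For what it's worth, the arithmetic behind those figures is just this; the per-worker payment is my own round number, chosen so the total comes out near the $100 billion mentioned above, not a figure from any source.

```python
# Back-of-envelope annual cost of the proposal, using the rough
# figures from the text (all in US dollars per year).
airline_profit = 30e9        # compensation for forgone annual profits
workers = 4e6                # staff paid to sit around doing nothing
payment_per_worker = 25e3    # assumed average annual payment (my guess)

total = airline_profit + workers * payment_per_worker
print(f"~${total / 1e9:.0f} billion per year")
```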
Tuesday, December 15, 2015
Disintermediation
Historically, scientific journals got started as a way to
share information. They were the most effective way to tell other researchers
what you were doing and find out what they were doing. Nowadays, they aren’t
really. They are complicated intermediaries – which add some value, true –
between you and the people you want to share information with. They have rules
which are largely arbitrary and which impact negatively on how useful they are as media for sharing information
with people – rules about how long a paper should be, about how it should be
structured, about what should be put in and what should be left out. More
importantly, their primary function nowadays is not to share information, but
to score points in The Academia Game™, one of the first and most
dramatic successes of the ‘gamification’ craze.
Anyhow, I think the search engines we have nowadays are good
enough to cope with a bit of disintermediation; so I thought I would have a go
at sharing my information here instead. Some of it, anyways. I’ve got a piece
of work, you see, that I can’t see scoring any points, and I want to tell you
about it.
Back in my PhD I came across a paper on work done by two
researchers at the University of Texas at El Paso in the 1970s, Wang and Cabaness.
In
this paper they had investigated the copolymerisation of acrylic acid (AA) and
acrylamide (AAm) in the presence of a
number of Lewis Acids of general formula XCl4, and reported that
tin tetrachloride could induce the formation of a copolymer with the regular
repeating formula ((AA)4(AAm)) – four acrylic acid residues in a
row, followed by an acrylamide unit, rinse repeat – which they attributed to a
1:1 alternating copolymerisation of a SnCl4(AA)4 complex
and acrylamide.
[Figure: Structure postulated by Cabaness and Wang 1975]
I was intrigued by this article, because I was also studying
polymerisation with Lewis Acids (only alkyl aluminium chlorides rather than the
XCl4 species), in systems where we got 1:1 alternating copolymers of
donor monomers (like styrene, butadiene, vinyl acetate, or acenaphthylene) with
acceptor monomers (like acrylates, methacrylates, and oh yes, acrylamides and
methacrylamides). The whole basis of our understanding of these systems was
that complexes of acceptor monomers with Lewis Acids made them fantastically
better at being acceptor monomers, and I couldn’t see how four monomers that
were all complexed to a single Lewis Acid ought to polymerise together: if they
were better acceptors, they would be less likely to polymerise with each other,
and once one added onto a growing polymer chain it seemed to me that it would
be more likely to add a (relatively donor-ish) free acrylamide rather than one
of its fellow complexed acrylic acids that was probably held in a sterically
unfavourable position, as well as an electronically unfavourable condition. And
4:1 regular polymers had never been reported with acrylic acid and any more
conventional donor monomers that would be more likely to behave themselves. It
was all very mysterious. Nobody had ever confirmed or followed up on this work
of Wang and Cabaness; which was disappointing, but not very surprising, because
the whole field of playing with various Lewis Acids and donor and acceptor
monomers had been a big thing over approximately the years 1968-1975 and had
then petered out for no good reason.
So many years later I found a bottle of SnCl4
lying around (it’s a liquid; it comes in bottles – the Sn(IV)-Cl bond has a lot
of covalent character) and remembered this paper and thought I would have a go.
Wang and Cabaness had only looked at their polymers using elemental analysis,
which realistically tells you about 2/10 of not very much about polymer
structure, whereas I had spent quite a lot of time looking for regularity in
polymer sequences using Nuclear Magnetic Resonance Spectroscopy, which tells
you an awful lot, and I thought I would make the polymers they made and have a
look at them with NMR. The proton NMR spectra of polymers are always broad, and
the backbone protons of acrylic acid and acrylamide residues (as you can guess
from their structures) end up on top of each other in a messy way. So the way to tell what is what is to do
carbon-13 NMR spectra, which give you nicely resolved peaks in the carbonyl
region, and reasonably well resolved peaks for the methine carbons. If you look at the carbonyl region of a
copolymer of AA and AAm, you can pick out the six different nearest-neighbour
environments quite nicely. In the figures below, for example, you can see
clearly how base hydrolysis of PAAm generates isolated AA units on the chain,
which have a protective effect on neighbouring AAm and make it much less likely
that they will be hydrolysed.
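If you want to convince yourself that it really is six environments, a quick enumeration does it: two possible centre units times three distinct neighbour pairs, since the left and right neighbours are chemically equivalent. A sketch:

```python
# Why six carbonyl environments at nearest-neighbour (triad)
# resolution in a binary AA/AAm copolymer.
from itertools import product

units = ("AA", "AAm")
environments = set()
for left, centre, right in product(units, repeat=3):
    # left/right neighbours are equivalent, so sort them together
    environments.add((centre, tuple(sorted((left, right)))))

for centre, neighbours in sorted(environments):
    print(f"centre {centre}, neighbours {neighbours}")
print(f"{len(environments)} distinct nearest-neighbour environments")
```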
[Figure: Table 3 – the tin tetrachloride data – from Wang and Cabaness]
I first tried doing what I would usually do, which is
freeze/thaw degassing. I
still had a significant inhibition period after I put the samples in a 60 °C
oil bath. When anything happened, it happened too fast for me to stop it, and
what it was, was my polymer ‘popcorning’ into a solid mass. This is something
that happens with monomers that polymerise very quickly. Polymer solutions are
viscous, and transfer heat worse the more viscous they get. Polymerisation reactions are exothermic. So,
a polymerisation that goes quickly generates a lot of heat and a viscous
solution, which makes it difficult for the heat to dissipate, and as the
temperature increases the reaction goes faster, making it even hotter and more
viscous, and you get very quickly to a temperature where your solvent
vaporises, and your polymer chars, and if you are doing a reaction in a 20
tonne reactor instead of a tiny little tube you ring the insurance company.
That is what happened to my first attempts: they polymerised into intractable
masses that I couldn’t get out of my reaction vessels without smashing them and
then wouldn’t dissolve once I’d smashed them out.
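Here is a toy sketch of that feedback loop, with every parameter invented for illustration: Arrhenius heat release against a fixed heat-loss term. Turn the heat-transfer knob down, as a viscous solution effectively does, and the same reaction flips from a tame steady state into runaway.

```python
# Toy model of the 'popcorning' runaway: Arrhenius heat release vs.
# Newtonian heat loss to the bath. All parameters are invented; only
# the qualitative steady-vs-runaway switch is the point.
import math

R, Ea = 8.314, 80e3       # gas constant (J/mol/K); activation energy (J/mol)
A = 1e12                  # pre-exponential heating term, K/s (made up)
T_bath = 333.0            # 60 degree C oil bath, in K

for h, label in ((1.0, "thin, well-stirred"), (0.05, "viscous")):
    T, t, dt = T_bath, 0.0, 0.01
    while t < 600.0 and T < 500.0:
        heating = A * math.exp(-Ea / (R * T))   # rises steeply with T
        cooling = h * (T - T_bath)              # fixed heat-loss coefficient
        T += (heating - cooling) * dt
        t += dt
    print(f"{label} (h = {h}): "
          + ("RUNAWAY" if T >= 500.0 else f"steady near {T:.1f} K")
          + f" at t = {t:.0f} s")
```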
I decided to drop the freeze/thaw degassing and use the
dodgier ‘sparge with nitrogen’ method in round bottomed flasks that would be
(should be) easier to get the polymer out of.
Yes, I could get the polymer out without smashing anything.
But the reactions that formed it were the same: the vessel sat there for a
while without doing anything, then suddenly there was the popcorny noise of
solvent vaporising, and insoluble masses formed with yields of approximately 100%.
I cut the temperature a bit further, and cut the
concentration of everything a bit, and still couldn’t get any useful
polymer.
Then my colleague Daniel Keddie suggested something that
made a lot of sense which I should have thought of a long time before: why not
put in a chain transfer agent? This is something that cuts the length of the
polymer chains but shouldn’t (knock wood) have any more dramatic effects on the
chemistry of the reaction: it just introduces a ‘jump to a new chain’ step that
replaces some of the propagation steps. So I put in enough butanethiol to reduce
the degree of polymerisation to about 25 and had another go. And I got polymers
I could remove from the round bottom flask, which actually dissolved up okay! These
proceeded to a conversion of about 40-60% when heated at 60 °C for 30 minutes –
again, almost all of this time was inhibition period, so I couldn’t actually
stop any of the reactions at a low enough conversion to get an unambiguous
relation between the composition of the polymer and the feed composition of the
monomers.
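For the curious, the thiol loading needed comes straight out of the Mayo equation. The chain transfer constant in the sketch below is a placeholder of order unity, not a measured value for butanethiol in this system.

```python
# Rough Mayo-equation estimate of the chain transfer agent (CTA)
# loading needed to cap the degree of polymerisation at ~25.
# C_tr = 1 is a placeholder, not a measured value.
def cta_to_monomer_ratio(dp_target, dp_uncontrolled=1000.0, c_tr=1.0):
    """[CTA]/[M] from the Mayo equation:
    1/DPn = 1/DPn0 + C_tr * [CTA]/[M]."""
    return (1.0 / dp_target - 1.0 / dp_uncontrolled) / c_tr

ratio = cta_to_monomer_ratio(25)
print(f"[CTA]/[M] ~ {ratio:.3f} (about 1 thiol per {1 / ratio:.0f} monomers)")
```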
Following the next step of the procedure – dissolving in
water and reprecipitating in methanol – was a little tricky, because the
polymers were reluctant to dissolve in plain water. So I added a bit of base to
convert the acrylic acid residues to sodium acrylate – and crossed my fingers
and kept the temperature low to avoid hydrolysing any of the acrylamide
residues to sodium acrylate. And I managed to get some halfway decent carbon-13
NMR by running these overnight, like so:
[Figure: Carbonyl region of 13C NMR spectra of AA:AAm:SnCl4 polymers made by lil' ole me]
Besides these negative results, though, there isn’t a lot
that I can say. The tin tetrachloride is doing something: it is making the
reaction go a lot faster than it would. It
*might* be encouraging a tendency towards alternation. Because of composition
drift, though, and because of the uncertainty in the literature reactivity ratios,
I can’t tell for sure whether there is any shift in the polymer composition
compared to what I would see without the tin tetrachloride. It looks to me like I am getting
significant amounts of base hydrolysis of any acrylamide residues which aren’t
alternating – which is what you would expect from the literature.
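The composition-drift problem can be seen from the Mayo-Lewis equation, sketched below with placeholder reactivity ratios (not the literature AA/AAm values): the instantaneous polymer composition depends on the feed, and at 40-60% conversion the feed has moved well away from where it started, so the polymer is an average over a range of compositions.

```python
# Instantaneous copolymer composition from the Mayo-Lewis equation.
# Reactivity ratios below are placeholders for illustration, not the
# literature values for acrylic acid / acrylamide.
def mayo_lewis(f1, r1, r2):
    """Instantaneous mole fraction F1 of monomer 1 in the polymer,
    given its mole fraction f1 in the monomer feed."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

r1, r2 = 1.4, 0.6   # placeholder reactivity ratios
for f1 in (0.2, 0.5, 0.8):
    print(f"feed f1 = {f1:.1f} -> instantaneous F1 = {mayo_lewis(f1, r1, r2):.2f}")
```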
In order to publish this, I would need to make sure my 13C
NMR was quantitative, which wouldn’t be too hard. I would also need to work out
a way to kill my reactions quickly, and I would need to figure out a way to
reprecipitate the polymers without hydrolysing the acrylamide residues.
Obviously these are soluble problems, but this system isn’t doing the really
exciting thing it was reported as possibly doing, and doesn’t appear to be doing
anything moderately exciting, so I don’t know if it is worth carrying on with.
Conclusions:
1) I have no idea how Wang and Cabaness got something out of their system that they could dissolve and reprecipitate and find viscosities of.
2) There is no evidence that the 4:1 acrylic acid:acrylamide composition they report can be reproduced.
3) People probably tried to repeat their work before, and came to the same conclusion, but didn’t put it on the internet.
Wednesday, July 1, 2015
One may be regarded as a misfortune; two begins to look like carelessness.
This is a post not so much about planetary science, as about
the sociology of science. If you have been following recent developments in the
astronomy of the solar system at all, you will know that the comet 67P-Churyumov-Gerasimenko
has a funny shape. The standard working explanation for this is that it has a
funny shape because it has been formed from two comets that have collided and
stuck together, a ‘contact binary’, and explanations that ascribe its shape to
deformation or ablation of an initially more spherical object are marginalised.
Yesterday I found out, being a slow newcomer in the comet world, that 67P-Churyumov-Gerasimenko is not the only comet with a funny shape. There is 103P-Hartley 2. It has a relatively smooth neck, and this neck is also associated with direct sublimation of water, as would be expected if it had a fresher surface that had been exposed for a shorter time.
And – ahem – it was also suggested that 103P-Hartley 2 is a
contact binary.
To quote Lori Feaga and Mike A’Hearn of the University of Maryland*, 8th
October 2011: "The heterogeneity between lobes is most likely due to
compositional differences in the originally accreted material. We are
speculating that this means that the two lobes of the comet formed in different
places in the Solar System. They came
together in a gradual collision and the central part of the dog-bone was
in-filled with dust and ice from the debris."
I had not understood why, but now I think I do.
First of all, I should explain that my biases are all
the other way around. I would be inclined to go through the most elaborate
mental gymnastics to avoid having to suppose a body was formed from two things running into each other. This
is based on two things: firstly, common to all people who do not get their
picture of space primarily from space opera, my perception of space is
of someplace that is very big and very empty where things move very fast. I do not
expect to be hit by a comet any time soon. If I am, I expect the relative
difference in our velocities to be such that we will not end up as a ‘contact
binary’, but as a mass of superheated comet and minor scientist-derived dust.
Secondly, in my particular case, I am used to thinking about chemical
reactions, and I know that almost all collisions of molecular bodies in the gas
phase do not end up with them sticking together; without a third body to take
excess kinetic energy away, they are too energetic to stick together, and fly
apart again. I know very well that comets are not very much like molecules, and
there is no reason for them to behave in exactly the same way, but that is the intuitive
bias I have about things running into each other in a big empty space: chances
are that if they are not smashed to flinders they will bounce off one another, and never come within coo-ee of each
other ever again.
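To put rough numbers on that intuition: two pieces can only stick gravitationally if they meet at something like the mutual escape velocity, which for a comet-sized body is of order 1 m/s, while any plausible encounter speed carries orders of magnitude more kinetic energy than the comet's gravitational binding. The sketch below uses round-number assumptions for a 67P-ish body; it is purely illustrative.

```python
# Back-of-envelope numbers for the 'big, empty, fast' intuition.
# Mass, radius, and encounter speed are round-number assumptions.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M, R = 1e13, 2e3         # comet mass (kg) and radius (m), rough
v_rel = 1e3              # assumed relative speed of encounter, m/s

v_escape = (2 * G * M / R) ** 0.5
print(f"escape velocity: {v_escape:.2f} m/s")          # ~1 m/s
print(f"specific kinetic energy at {v_rel:.0f} m/s: "
      f"{0.5 * v_rel**2:.1e} J/kg")                    # ~5e5 J/kg
print(f"specific gravitational binding energy: "
      f"{G * M / R:.1e} J/kg")                         # ~0.3 J/kg
```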
I have been drawn inexorably into thinking about comets by Andrew Cooper and Marco Parigi, and I interpret the shape of 67P-Churyumov-Gerasimenko
as they do, in terms of the ‘Malteser’ model I have outlined below: a brittle,
dense crust has developed on the comet surrounding a deformable core, and at
some time in the past this crust has ruptured and the gooey interior has
stretched. If this were true, unless it happened a very long time ago indeed, the
neck of the comet should be less depleted in volatiles, and that is where most
of the water sublimation should be coming from. This seems to be the case.
[Figure: 67P-Churyumov-Gerasimenko, being awesome.]
[Figure: 103P-Hartley 2. The bright light from the rough end is coming predominantly from chunks breaking off a frangible surface and subliming, if I have understood correctly.]
I might be prepared to grant one contact binary, since the
universe is a strange place, and all manner of things can happen. But two from
such a small sample size? I don’t think so.
But as I said at the beginning, this isn’t supposed to be a
post about planetary science. It finally struck me yesterday why so many
people would cling to the idea that funny comets are formed by bodies sticking
together.
Paper after paper refers to comets as ‘primitive’; relics of
the primordial Ur-cloud out of which the solar system accreted, from which we
can learn about the nature of the Ur-cloud. If comets are not primitive – if they
have suffered all sorts of physical and chemical transformations as a result of
repeated annealing by swinging by the sun – then they don’t tell us anything
about the Ur-cloud.
I think a lot of people who study comets are very interested in the Ur-cloud. They got into comets in the first place as the best way they could approach this beast, whose nature is so important for understanding the overall history of the solar system and all those other solar systems out there. They are not necessarily interested in comets as ends in themselves. The contact binary model is the one way that observed heterogeneous, oddly-shaped and oddly-behaved comets can be reconciled with observable comets as primordial bodies. So, if you are interested in the antiquity of the solar system, what are you going to do? Accept that ‘you can’t get there from here’, or grab at this improbable lifeline that leaves the road unblocked?
*: In the interests of full disclosure, I was once a resident of Maryland, when my father was in the army. We moved back to Arizona when I was 2.
Tuesday, June 30, 2015
Questions Nobody is Asking, #2137 in a series
My Google skills are failing me.
I've tried a couple of times recently to find an answer to the question, 'How does Google Scholar decide what order to list your co-authors in?'
Clearly, it's not alphabetical.
It doesn't seem to be based on any of the six indices that are quoted for each academic: not total publications, or publications since year x, or h-index, or h-index since year x, or i10-index, or i10-index since year x.
It doesn't seem to depend on *when* we published together.
Or when my co-authors took ownership of their Google Scholar profile.
Or how often we have published together, or how well the publications we have *together* have been cited.
It doesn't fit with any indices divided by years since first publication, or when a co-author first published, or any simple combination of any of the above that I can think of.
The one thing I can be sure of is that it is linked to individual papers somehow.

And I was about to say, aha, it is something to do with the impact factor of where we first published together, since I am only on conference papers or book chapters with my last two co-authors; but that isn't going to work either.
I dunno. I expect it is some complex proprietary multi-factorial algorithm. Names never seem to move once they are in the list, so maybe it would make sense if I looked at the values of some parameter or other at the time a co-author was added to the list. Or else my basic observation skills and Google skills are equally weak lately and I need a holiday to be fit for mental work. Or - maybe it is totally random, so no one has to feel bad about being everybody's last listed co-author.
Monday, March 16, 2015
Another Brief Statement of the Obvious
There are four chief obstacles to grasping truth, which
hinder every man, however learned, and scarcely allow anyone to win a clear
title to knowledge: namely, submission to faulty and unworthy authority,
influence of custom, popular prejudice, and concealment of our own ignorance accompanied by ostentatious display of our knowledge.
– Roger Bacon (1266)
Wednesday, March 11, 2015
A naive look at comets
Okay, the relentless drip-drip-drip of Marco's comet obsession has finally gotten to me. I said I was going to post something about how you could get a mass distribution in comets that was consistent with the 'brittle crust, gooey low-tensile strength centre' required for the observed morphology of 67P-Churyumov-Gerasimenko.
So here it is.
I’m going to assume we start with a homogeneous mass
containing all the things that will be solid grains out at a temperature of
very few K and that have been shown to be in comets. So sodium and magnesium
silicates, and ammonia, and methanol, and carbon dioxide, and carbon monoxide,
and water, and methane – and importantly some higher hydrocarbons, which
haven’t shown up in the gas from 67P-Churyumov-Gerasimenko but seem to be
attested in other systems. I guess the primordial assumption behind this is
that comets formed from the outer edges of the big solar system Ur-cloud, so
weren’t subjected to much in the way of fractionation processes beforehand. If
this is some terrible error in cometology, please let me know.
We roll all these things into a ball using gravity™
and hurl it towards the sun.
As it gets warmer on the surface, the stuff that is most volatile, like methane, will sublime first, and leave behind a very fluffy residue of stuff that doesn’t want to sublime as easily.
Some of this fluffy stuff will flake off and float away; probably a lot;
but it will also invariably compact as bits flake off and lodge with other
bits, and at temperatures where higher organics are not volatile but the small
molar mass stuff is, these organics will end up concentrated on the surface where they can
soak up all those lovely cosmic rays and start forming larger crosslinked
molecules by radical-radical recombination.
Okay, if the mass of the comet is small this evaporation process will
just continue until you are left with a relatively dense mass of non-volatiles. But if the
mass is bigger, this surface layer – which is gradually getting more compact
and more enriched in high molar mass organics as we approach the sun – will
start acting as a barrier to the progress of gas molecules from the warming
interior.
And eventually, I can imagine this means there won’t be
vacuum at the interior surfaces where sublimation is occurring, but appreciable
pressure. There seem to be abundant studies in the literature of gas building
up in comets.
Now, once the pressure and temperature increase above the triple
point of any of the constituents of the comet interior, there will be the
possibility for liquid to exist there.
I’ve plotted the triple points of some constituents of
comets below.
(Note, if you are unfamiliar with triple points, that these
are the bottom points of triangular areas where the liquid state can exist. The
exact phase diagrams of things are much more difficult to find than the triple
points and may not have been mapped out in detail for all of these.)
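For anyone who wants to reproduce the plot, here is a minimal sketch. The triple-point values are from memory and rounded; treat them as indicative, not authoritative.

```python
# Sketch of the plot described above: approximate triple points of
# some comet constituents. Values are from memory and rounded.
import matplotlib.pyplot as plt

triple_points = {             # (temperature / K, pressure / kPa)
    "carbon monoxide": (68.1, 15.4),
    "methane": (90.7, 11.7),
    "methanol": (175.6, 2e-4),
    "ammonia": (195.4, 6.1),
    "carbon dioxide": (216.6, 518.0),
    "water": (273.2, 0.61),
}

fig, ax = plt.subplots()
for name, (t, p) in triple_points.items():
    ax.scatter(t, p)
    ax.annotate(name, (t, p))
ax.set_yscale("log")
ax.set_xlabel("Temperature (K)")
ax.set_ylabel("Pressure (kPa)")
ax.set_title("Approximate triple points of some comet constituents")
plt.show()
```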
The species that are going to be liquid at relatively low
temperatures and pressures are relatively minor constituents of comets, of the
order of a few percent in total (probably), so we won’t get a gooey liquid
core, but a ‘moist’ core with liquid on the surfaces of solid grains, like a
fairly dry soil. This will happen at temperatures and pressures far below what
we would need to liquefy water, and these liquids will persist in the core
as more volatile components turn into gas and bubble out, while the shape of
the comet is maintained by the outer layer of non-volatiles gummed together by
organic polymer.
Now, when the comet flies out into the cold again, the
interior part of this liquid core is likely to freeze before it evaporates, leaving the core of the comet
no longer a loose aggregation of stuff, but particles of relatively
non-volatile components sintered together with a layer of frozen hydrocarbons.
This will still have a low tensile strength, but I would think this sintering
would significantly improve its compressive strength. And the different levels of degassing possible from a fairly rigid external shell would fit the great variability in comet densities observed. There may also be
interesting fractionation of one component freezing before another, depending
on how good the outer layer is at retarding heat and mass transfer from the
interior. How hot can it get inside? How high can the pressure get? I guess I should look up people's best guesses for those questions. I can imagine the shell getting stronger and stronger on repeated passes of the sun if it doesn't break, sustaining higher pressures inside, and eventually allowing the same sintered morphology to be reproduced with water ice holding inorganic particles together.
Saturday, February 14, 2015
Define. Clarify. Repeat.
Being A New Post for Marco to Hook Comments On
Like most people with an interest in the historical
sciences, I count Alfred Wegener among my heroes. He marshalled an overwhelmingly
convincing mass of evidence of the need for continental drift, argued his case
coherently and courageously against monolithic opposition, and was eventually
vindicated long after he disappeared on a macho scientific expedition across
the Greenland ice cap.
Like most people who have thought about it seriously, I also count among my heroes the pillars
of the scientific establishment who mocked Alfred Wegener.
Because no matter how much evidence there is of the need for a new theory, you
can’t throw out the old theory until you have a new theory. And for a new
theory to be science, it needs to have a plausible mechanism. And in the case
of continental drift, there was no proposal for how it could possibly have
happened that was not obviously wrong until the mid-ocean ridges were
discovered, long after Wegener’s death.
To generalise: if you are an iconoclast who wants to
convince me to change my mind about something scientific, you need to do two
things. (1) Present an overwhelming mass of evidence that the existing models
are inadequate: there has to be something that needs to be explained. (2)
Present some vaguely plausible model, consistent with the other things we know,
that explains this stuff that needs to be explained.
The rest of this post
is just going to be me arguing with Marco, so I’ll see the rest of you later.
:)
***
From the most recent comment of Marco down on the ‘Yes, Natural Selection on Random Mutations is
Sufficient’ post:
I'm just going to summarise what I believe to be
the source of our differences.
It does not make sense to talk about the source of our
differences. You have not yet clarified your model sufficiently for it to be
meaningful to talk about the differences between us. As the iconoclast, you need to present
overwhelming evidence that the existing model needs to be changed, and some
plausible mechanism for an alternative model. These are both necessary. Reiterating that you see the need for a change, and advancing very vague mechanisms that are not linked to the known facts of molecular biology, are never going to convince me. Of course there is no need for you to convince me; but if you want to convince anyone in the scientific community, these are the two things that you need to do.
1) Experimental basis - your mentioning that a
synthesis cannot be experimentally derived by definition, is to me an admission
that it is not strictly science. You do believe it to be science by a reasoning
I do not accept.
The only way I can construe this statement is that the
historical sciences in toto are not science. To me, this is an unreasonable
limitation of the meaning of ‘science’. It impacts all possible mechanisms of
evolution.
2) expanding informatics principles to genetics - genes as a store of information and instructions analogous to a computer algorithm. To me it is obvious and valid - to you (and biologists in general) it is a big no-no.
This is because biologists know more than you do. The relationship
of genes to computer algorithms is only an analogy, and it is a very weak
analogy.
3) definition of randomness - Ditching a statistically valid definition of random in favour of a statutory functional one makes the synthesis *not falsifiable* in the statistical sense. One should be able to verify that a simulation based on statistical randomness would come to the same probability distribution.
4) Dismissal of trivial non-randomness. You appear to do this equally for biochemical reasons that mutations would happen in certain areas more than others that are not proportional in any way to the function, but also it appears for things like horizontal gene transfer and epigenetics effects. To me it is an admission that random is incorrect even in your functional description. For instance, I think it is as reasonable a synthesis that horizontal gene transfer explains the spread of all beneficial mutations. I do not think that it is the whole story, but the standard evolutionary synthesis crowds out other ideas as if it had been experimentally verified.
I am absolutely sure that a simulation based on statistical
randomness could show natural selection
on random variations was sufficient to account for biological change. I am absolutely sure
that a simulation based on trivial non-randomness could show natural selection
on what I call trivially non-random variations was sufficient to account for biological change. Alternatively,
I am sure that a simulation based on statistical randomness could show natural selection on random
variations was not sufficient to account for biological change. None of these
simulations would necessarily tell us anything about reality. The real system
that we are trying to simulate is too complicated. Modelling is not experiment. All models are only as good as their assumptions. This quibbling about definitions of randomness is, to me, irrelevant and uninteresting. It does not get us one step closer towards identifying a deficiency in the standard model, nor clarifying a plausible mechanism for directed evolution.