Thursday 26 September 2013

Measuring Rainfall

Measuring Rainfall by Chris Skinner (@cloudskinner)

Before I embarked upon my PhD research I had not paid much attention to how we record rainfall. My previous experience, probably like many people's, came from my Primary School, which had a small weather station in the grounds consisting of a weather vane for measuring wind direction and a raingauge for recording the rainfall. The gauge was nothing more than a small bucket that collected the rain, and you read the level off the side each day. If the water had reached the 4mm mark, you recorded 4mm of rain as having fallen in the last day.

That was it, as far as my knowledge went, and as far as I assumed it went when it came to recording rainfall for the weather forecasts. I wasn't wrong: the Met Office here in the UK do still make extensive use of raingauges to observe rainfall. I will let Ralph James explain them to you –


However, as I soon learnt, raingauges only measure rainfall at one stationary point. The little bucket I used at my Primary School could tell me how much rain fell at the school, but it could not tell me how much rain fell at my house, or how extensive that rainfall was. To fill in the gaps, meteorologists use weather radars. Over to Biz Kyte –



Brilliant! There we have it then, measuring rainfall, easy peasy. You just need a network of thousands of raingauges, enough radar stations to cover your country and enough highly qualified engineers and scientists to operate and maintain it all.

You won't be surprised to hear that these conditions do not extend to many areas of the world. Sub-Saharan Africa, for example, has not had the resources and/or the political will to establish the infrastructure required for timely, accurate rainfall observations, and this has implications when trying to forecast floods, crop yields, droughts or water resources. Obviously, being able to observe rainfall in real time in this region would be greatly beneficial, but the installation and maintenance of raingauge and radar networks is just not currently feasible.

One alternative is to turn to satellite observations. Satellite platforms carrying Passive Microwave (PM) sensors are the most accurate for this role, with the instruments measuring the amount of microwave backscatter from the Earth's surface. As droplets of water scatter the microwave signal in a distinctive way, it is possible to directly observe where it is raining and its relative intensity. But (there's always a but), PM sensors have to be placed in Low Earth Orbits (LEO) to operate, and therefore travel over the planet's surface, recording snapshots of the rainfall as they go. To add to the problem, sandy ground scatters microwaves in a similar way to water, making observations by PM satellites more difficult in arid regions, such as much of sub-Saharan Africa.

Another way, such as that adopted by the TAMSAT team at the University of Reading, is to use Thermal InfraRed (TIR) instruments mounted on geostationary platforms. These satellites orbit at a distance that allows them to match the speed of the Earth's rotation, meaning they always observe the same area of the planet's surface - this is what makes the orbit geostationary. TAMSAT use a relationship called Cold Cloud Duration (CCD): it is assumed that if a cloud top is cold enough it will be raining, and the amount of time a cloud spends below that temperature threshold lets the team calculate the rate of rainfall. It is an indirect relationship, so it does not directly record the rainfall, but it does provide an estimate that is accurate enough, and timely enough, to be useful in forecasting seasonal crop yields or droughts.
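As a rough illustration of the idea (my own sketch, not TAMSAT's operational algorithm), the calculation for a single satellite pixel might look something like this; the temperature threshold and the linear calibration coefficients are placeholders that would in reality be fitted against raingauge data.

```python
import numpy as np

def rainfall_from_ccd(cloud_top_temps_k, timestep_hours=0.5,
                      threshold_k=235.0, a=0.3, b=0.0):
    """Estimate rainfall (mm) for one pixel from cloud-top temperatures.

    cloud_top_temps_k : series of thermal infrared cloud-top temperatures (K)
    timestep_hours    : time between satellite images
    threshold_k       : clouds colder than this are assumed to be raining
    a, b              : linear calibration (placeholders, not TAMSAT's values)
    """
    temps = np.asarray(cloud_top_temps_k, dtype=float)
    cold = temps < threshold_k                  # True wherever the cloud is "cold enough"
    ccd_hours = cold.sum() * timestep_hours     # cold cloud duration over the period
    return max(a * ccd_hours + b, 0.0)          # simple linear CCD-to-rainfall relation

# Example: 48 half-hourly images (one day), with cold convective cloud overhead for 6 hours
temps = np.full(48, 260.0)
temps[20:32] = 220.0
print(rainfall_from_ccd(temps))   # 6 h of cold cloud -> about 1.8 mm with these placeholder coefficients
```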

Again, there is a but. TAMSAT produces ten-day observations, useful for the above applications, but not very useful for flood forecasting, for example, which requires real-time observations at at least a daily timestep. It is possible to use the CCD method for this, but the observations are highly uncertain so require some complex statistics to be properly used. This has led meteorologists to get creative.

Telecommunications are taking off in sub-Saharan Africa, with mobile phones spreading fast. Professor Hagit Messer, of Tel Aviv University, suggested that the interference rainfall causes to signals sent between antennas could be used to measure the rainfall rate along the path between them. Over a whole network of telecommunication antennae, a picture of the spatial spread of rainfall and its intensity could be built up, evolving over time. This form of rainfall observation could be used to dramatically improve the spatial and temporal coverage over sub-Saharan Africa, with little need for additional investment.
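To give a feel for how that can work (a minimal sketch of the general idea, not Professor Messer's actual method), rain-induced attenuation along a microwave link is commonly related to rain rate by a power law, which can be inverted to recover a path-averaged rain rate. The coefficients below are illustrative placeholders; real values depend on the link's frequency and polarisation.

```python
def rain_rate_from_link(attenuation_db, path_length_km, a=0.12, b=1.05):
    """Invert the power-law relation between rain rate and microwave
    attenuation, k = a * R**b (k in dB/km, R in mm/h).

    attenuation_db : extra signal loss over the whole link during rain (dB),
                     i.e. the drop relative to dry conditions
    path_length_km : distance between the two antennas
    a, b           : frequency- and polarisation-dependent coefficients
                     (placeholder values here, not tabulated ones)
    """
    specific_attenuation = attenuation_db / path_length_km   # dB per km along the link
    return (specific_attenuation / a) ** (1.0 / b)           # path-averaged rain rate, mm/h

# Example: a 10 km link losing an extra 6 dB during a storm
print(round(rain_rate_from_link(6.0, 10.0), 1), "mm/h")      # about 4.6 mm/h
```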

And again, there is a but. Whether it is observation by radar, satellite or telecommunication networks, the instruments can only record where it is raining, when it is raining, and the relative intensity of the rainfall. That relative intensity needs calibrating, bringing scientists full circle back to the humble raingauge. There are raingauges in sub-Saharan Africa, but not a lot of them. The study area I researched had one gauge per 7,000 km², a density that would cover the whole of the UK with just 27 raingauges, and of course these weren't evenly spread, being concentrated along rivers and in towns and leaving large areas relatively uncovered. They can also be poorly maintained, and not all raingauges record all of the time.

There are some good stories about raingauges in Africa. A couple that I have heard from the TAMSAT team: one gauge recorded no rainfall at night, even during the wet season, and when investigated it was found that the locals looking after the gauge were storing it inside overnight so it wouldn't be stolen. Another gauge was consistently recording a light drizzle – this was caused by people hanging wet clothes on it to dry. We have similar issues in the UK, with one organisation who should know better placing a gauge on their roof next to an air conditioning vent that blew rainfall away from it.


One project that I am excited about is TAHMO. The project team have the highly ambitious objective of dramatically increasing the raingauge coverage (as well as coverage of other meteorological instruments) for sub-Saharan Africa by mass-producing a cheap, self-contained weather station and distributing it to schools. One of the most significant outputs to date has been the creation of a low-cost acoustic disdrometer, which uses the vibrations of falling raindrops to measure rainfall rates and reports the readings automatically using mobile phone technology. For me, this is the great hope of rainfall observation for poorly gauged regions and I really hope they can pull it off. For now, I'll leave you with Rolf Hut discussing TAHMO, acoustic disdrometers and tinkering.


Wednesday 18 September 2013

Do hot spots wiggle? - Limited latitudinal mantle plume motion for the Louisville hotspot



The Louisville mantle plume, responsible for creating a 4300 km chain of volcanoes, is essentially fixed, showing only limited motion. This is the finding of a new study published late last year in Nature Geoscience, and is the result of a two-month Integrated Ocean Drilling Program (IODP) expedition (Expedition 330) to the SW Pacific Ocean in 2010, on which I sailed as an igneous petrologist. This blog takes us through why this is so important and how the study was completed.

by Rebecca Williams (@volcanologist)

The Louisville seamount trail is a chain of volcanoes that stretches for over 4000 km. The oldest volcano, which is right next to the Tonga-Kermadec Trench, is around 80 million years old. At the south-eastern end of the chain, the youngest volcano is thought to be around 1 million years old. The linear chain appears to have an age progression along its length, meaning that the volcanoes get older as you go from the SE end of the chain towards the Tonga-Kermadec Trench. The volcanoes are also what we call ‘intraplate’ volcanoes, which means that they are not found on plate margins where we expect to find volcanoes, like around the Pacific Ring of Fire. All this suggests that the Louisville Seamount Trail is the result of hotspot activity and is the SW Pacific equivalent of the Hawaii-Emperor Seamount Trail. In fact, the Louisville chain trends in exactly the same way as the Hawaii-Emperor chain.
 
Hotspots are thought to be stationary thermal anomalies in the mantle that may originate at the core-mantle boundary. These fixed points, and the trails of volcanoes that they produce, have been essential in our understanding of plate motions. For example, if the volcano chain is 1000 km long, and there is an age gap of 10 Ma between the south-easternmost volcano and the north-westernmost volcano, then we can infer that the plate has been moving over the hotspot at a rate of 1000 km per 10 million years, or 10 cm per year, to the north-west.
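Written out as a tiny calculation (purely illustrative, using the round numbers above and the approximate ages quoted in this post):

```python
def plate_speed_cm_per_yr(chain_length_km, age_gap_myr):
    """Average plate speed over a hotspot, from the length of the volcano
    chain and the age difference between its two ends."""
    km_per_myr = chain_length_km / age_gap_myr   # e.g. 1000 km / 10 Myr = 100 km per Myr
    return km_per_myr * 1e5 / 1e6                # 1 km = 1e5 cm, 1 Myr = 1e6 yr

print(plate_speed_cm_per_yr(1000, 10))   # 10.0 cm/yr, as in the example above
print(plate_speed_cm_per_yr(4300, 79))   # ~5.4 cm/yr, a rough figure for the whole Louisville chain
```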

The Louisville Seamount Chain, adapted from Koppers et al., 2013. Notice the age progression of the seamounts and the general trend, equivalent to the Emperor-Hawaii Seamount Trail.

However, when the Emperor Seamount Trail was drilled during ODP (a former version of IODP) Leg 197, the scientists found that during a period between 50 and 80 million years ago, the Hawaii hotspot actually moved! In fact, its latitude changed by 15° over this time. So, if Louisville is a SW equivalent of Hawaii, did its hotspot move as well? Are hotspots really fixed or do they wiggle about? Was this movement caused by dynamics within the mantle – a mantle wind that would have displaced both the Hawaii and the Louisville plumes? 

Expedition 330 was designed to test this hypothesis. From Dec 2010 to Jan 2011, the JOIDES Resolution drilled five seamounts of an equivalent age to the Emperor Seamounts that demonstrated plume motion. We drilled through 1068.2 m of rock, through thin sedimentary cover and deep into the volcanic succession. Record recovery rates (the amount of rock recovered as core versus the amount of rock drilled through) of up to 87.8%, with an average of 72.4%, meant that we had plenty of volcanic rocks to study. In order to test the hypothesis of plume motion, the palaeomagnetic inclination of the volcanic rocks and their ages were determined.
 
When a hot rock cools below the Curie point, its magnetic minerals align with the Earth's magnetic field at the time of cooling. The Earth's magnetic pole changes position over Earth's history, so the alignment of the magnetic minerals will be different in rocks of different ages. Palaeomagnetists can measure that alignment in the rock. If they analyse many rocks spanning a period of around 1 million years, the values for magnetic north should average out to roughly coincide with geographic north. Any inclination remaining in the averaged value must then be due to the latitude at which the rocks formed. We can date these rocks using radiometric dating (40Ar/39Ar), so we know how old the rocks are and can look at any changes in inclination through time.
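For the curious, here is a minimal sketch of that last step, using the standard geocentric axial dipole relation tan(I) = 2 tan(latitude); the inclination in the example is illustrative, not one of the expedition's measurements.

```python
import math

def palaeolatitude_deg(inclination_deg):
    """Convert a mean palaeomagnetic inclination to a palaeolatitude using
    the geocentric axial dipole relation tan(I) = 2 * tan(latitude)."""
    inclination = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination) / 2.0))

# An average inclination of about -64 degrees (negative = southern hemisphere)
# corresponds to a palaeolatitude of roughly 46 degrees south.
print(round(palaeolatitude_deg(-64.0), 1))
```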

Lithostratigraphy and corresponding inclination data for Rigil Seamount. Core logging and palaeomagnetic measurements were all carried out onboard by expedition scientists during Expedition 330. From Koppers et al., 2012 (Nature Geoscience).

Right now, the Louisville hotspot is at 50°26’S and 139°09’W. Rocks from four of the seamounts we drilled were studied: Canopus Guyot (74 Ma), Rigil Guyot (70 Ma), Burton (64 Ma) and Hadar Guyot (50 Ma). It was found that Rigil Seamount has an average palaeolatitude of 47.0°S (+10.5°/-5.6°), which is comparable to the current location of the Louisville hotspot at ~51°S, as are the estimates for the Burton and Hadar Seamounts. The oldest seamount, Canopus, does have a lower palaeolatitude of around 43.9°S, and this may mean that there was some motion towards the southwest at this time. The best estimates are that there has been only limited latitudinal movement of the Louisville plume, of 3-5°, since 70 million years ago.

The study concludes that the Louisville plume is relatively fixed. When compared to the Hawaii plume, which had a rapid 10° southern shift during this time period, the Louisville plume had independent motion and there is no evidence for the proposed mantle wind. This means that, when considering plate motions, the shape and age progression of the Louisville seamount chain is a more robust dataset for calculating Pacific Plate motion than the Hawaiian-Emperor chain. Since ODP Leg 197, the sharp bend in that chain has been reinterpreted to reflect the effect of plume motion, rather than a change in the motion of the Pacific Plate. The Louisville dataset now lends support to this.

Work is now ongoing by shipboard scientists to understand more fully the Louisville Seamount Chain. I, with a variety of co-authors, am characterizing the geochemical evolution of the chain and attempting to understand its mantle source by looking at its whole rock geochemistry and Hf (and Nd-Pb-Sr) isotope signatures. Watch this space for this research – I’ll blog on it as it’s published.

This blog is based on:

Koppers, A.A.P.; Yamazaki, T.; Geldmacher, J.; Gee, J.S.; Pressling, N.; Hoshi, H.; Anderson, L.; Beier, C.; Buchs, D.M.; Chen, L-H.; Cohen, B.E.; Deschamps, F.; Dorais, M.J.; Ebuna, D.; Ehmann, S.; Fitton, J.G.; Fulton, P.M.; Ganbat, E.; Hamelin, C.; Hanyu, T.; Kalnins, L.; Kell, J.; Machida, S.; Mahoney, J.J.; Moriya, K.; Nichols, A.R.L.; Rausch, S.; Sano, S-i.; Sylvan, J.B. & Williams, R.; 2012. Limited latitudinal mantle plume motion for the Louisville hotspot. Nature Geoscience 5, 911-917. doi:10.1038/ngeo1638 http://www.nature.com/ngeo/journal/v5/n12/full/ngeo1638.html
(Contact me for a PDF)

More information on the expedition can be found here:

Koppers, A.A.P., Yamazaki, T., Geldmacher, J., and the Expedition 330 Scientists; 2013. IODP Expedition 330: Drilling the Louisville Seamount Trail in the SW Pacific. Scientific Drilling, No. 15, March 2013. doi:10.2204/iodp.sd.15.02.2013

Koppers, A.A.P., Yamazaki, T., Geldmacher, J., and the Expedition 330 Scientists; 2012. Volume 330 Expedition Reports – Louisville Seamount Trail. Proc. IODP, 330: Tokyo (Integrated Ocean Drilling Program Management International, Inc.). doi:10.2204/iodp.proc.330.2012 http://publications.iodp.org/proceedings/330/330title.htm

Fitton, J.G.; Williams, R.; Anderson, L.; Kalnins, L.; Pressling, N.; 2011. Expedition 330: The Louisville Seamount Chain. UKIODP Newsletter 36, August 2011. http://www.bgs.ac.uk/iodp/docs/UKIODP_36.pdf

Expedition 330 Scientists; 2011. Louisville Seamount Trail: implications for geodynamic mantle flow models and the geochemical evolution of primary hotspots. IODP Preliminary Report 330. doi:10.2204/iodp.pr.330.2011.

Parts of this blog have previously appeared in R. Williams’ Expedition 330 blog here: http://joidesresolution.org/blog/252

Wednesday 11 September 2013

Compare and Contrast: conference size and conference experiences

by Dr Jane Bunting

As you might have noticed, it’s conference season in academia (the largest one, anyway – smaller flurries around Christmas and Easter also occur).  I was very restrained, and restricted myself to attending two meetings this summer.  That was partly a financial decision – resources are limited, and if I need to pay part of the costs for going to a meeting myself I need to have a pretty compelling reason to go – and partly because Dr Michelle Farrell and I were hosting one of them, so anticipated a lot of work which would distract from doing actual research.  We’ll say more about the science elements of each conference later, but in this post I wanted to say something about the differences between a small meeting and a large one.
 
Blurry Twitter pic of the opening session at INTECOL13 (from @Scienceheather, used without permission). The hall just about sat 2000 people... the tiny bright things at the front are the people speaking!
The large meeting was INTECOL2013 (see Lindsay Atkinson’s post here about the meeting in general, and check out the twitter hashtag #INT13 for an insight into the range of science AND socialising that goes on at a large meeting).  This was held in a purpose-built conference venue in London’s docklands (with air-conditioned lecture rooms!  Luxury!), lasted five days between 19th and 23rd August (with optional trips, registration and a drinks reception on Sunday the 18th), and had around 2000 delegates from 67 countries.
 

View from the speaker's lectern at the small meeting (BEFORE the talk began!). Own photo.
The small meeting was entitled ‘Landscape-scale palaeoecology’ and was part of the Crackles Bequest Project.  The meeting part lasted for three days (6th – 8th August), with workshop sessions offering training in the use of data analysis software on Monday 5th and Friday 9th, which most of the registrants also attended.  It was held in various rooms in the Cohen Building, where the Geography, Environment and Earth Sciences Department is housed at Hull, and the middle day, Wednesday, was spent on a bus and visiting a couple of local nature reserves.  We had 34 delegates attending at least one day, and they came from 11 different countries.

At a small meeting, it’s possible to know everyone by name by the end of the meeting, and there is only one activity scheduled at any one time.  That means that everyone goes to the same talks, so shares the same basis for discussion.  I made a point of trying to chat with everyone at least once during a lunch or coffee break, or evening meal, and I think I just about succeeded.  At the large meeting, it was probably impossible even to see every attendee, since there were usually multiple parallel sessions of talks spread across up to 19 rooms, and people came and went.  Even the ‘plenary’ talks when one big-name speaker was scheduled to speak in the biggest of rooms weren’t attended by everyone; I certainly wasn’t the only person reading the title and abstract, noticing how far from my own areas of interest the talk was, and using that slot to sleep in a little or to meet with someone to talk science or just to wander around the exhibits or sit down and digest what I’d heard so far.  As the week wore on, some people took advantage of the padded benches in the common areas outside the lecture rooms to take naps, and many used the free wifi to check email or otherwise spend time online.  I set myself a modest goal of talking to one new person each day, since I knew I’d want to spend time with colleagues who are also friends who I don’t see much of outside of conferences, and that worked well for me.  I never did really work out the schedule, though...

For the small meeting, we handed out name-badges, the abstract volume (booklet containing the programme and short summaries of each talk and poster, provided by the author(s)) was put together by Michelle and photocopied the week before, and there was also a box of pens for those who needed a writing implement.  The registration fee also included coffee breaks and lunches.  Large meetings cost a lot more than small meetings, but often much of that cost is related to hiring the venue and venue staff such as caterers to serve coffee or security people to check badges and look after luggage.  We were all presented with a nice eco-friendly carrier bag containing an eco-friendly pen (sadly, these are not chewing resistant, and I bust mine within a few hours of starting to use it), a couple of advertising fliers and a professionally printed programme showing all the events and their locations.  Abstracts for individual talks and posters were available on-line or via an ‘app’, but not issued in printed form – given that there were around 1000 talks and several hundred posters, the choice between teeny tiny print and weighing everyone down would have been pretty difficult!  There was also a large exhibition hall containing stands where groups like professional societies, publishers, equipment manufacturers and software suppliers had displays.  In order to attract attention, many of these displays had Free Things to give away, especially pens (I picked up another eco-pen with a barrel made of recycled cardboard, and it was falling apart within an hour.  I’m just not good at pens!  My eco-pencil, made from lunch trays, is working fine).  The British Ecological Society had particularly great freebies, including notebooks, post-it pads, travel card wallets, badges and even keyring torches, all decorated with their logo. 

In terms of social media, we announced that the conference was happening to the rest of the department by email the week before (so that they wouldn’t be too surprised by the group of strangers traipsing from the Earth Science Lab where we had lectures to the Map Room for lunch or round to the centrally booked computer room Cohen-107 for a practical session) and the schedule was shared via a pink-highlighter-adorned notice on my office door each day.  A few tweets mentioned the conference, but it didn’t have its own hash-tag.  INTECOL made much more use of social media, from the earliest stages of advertising the meeting, with regular bulletins emailed round a mailing list and the conference advertised via listservs and different academic societies, to having all the abstracts and the programme (along with travel information and other useful stuff) available via a free app for mobile devices, and even using Twitter as the only medium for asking questions in the plenary sessions with the big-name speakers.  I actually felt rather left out at times, as I don’t currently have a smart phone, and could have done with one during the day at sessions to keep in touch rather than just logging in occasionally via my netbook when I had a table to put it on.  I joke that I don’t have one because I’m a Luddite and my current phone works perfectly well still so why replace it, but part of the reason is that I am very distractible, and I worry that I’ll spend far too much time tweeting and emailing and playing Angry Birds if I have a smart phone.

Both types of conference are enjoyable, and exhausting, and full of good science and new ideas.  Big conferences are good if you are a bit of an introvert and need quiet time to recharge your energy, since it’s very easy to find a space in the programme when you won’t be missed and a place to sit alone.  Big conferences are exciting, there’s no doubt about that, but they can also be rather overwhelming, and unless your interests are finely focused and align with one of the major themes of the meeting you can feel like you are constantly missing out on talks you really want to go to because they clash with something else in the programme.  Small meetings are intense in a different way, since you spend a lot of time with the same people – but since they are all nerdily interested in the same scientific problem, there are always things to talk about.  For me, both are more enjoyable in retrospect, when the hassles of lugging bags across London on the underground on a hot day or of dealing with all the little problems such as printing off e-boarding passes for return flights, booking taxis, and helping people navigate the bizarreness that is British railway pricing policies have faded into the background, and what you remember are the good conversations, the exciting new ideas and the sense of being part of a scientific community.
 
Picture of a crowded session at INTECOL13 (from Simon Harold (@sid_or_simon)'s twitter feed, used without permission)

Wednesday 4 September 2013

Getting Animated

Getting Animated by Chris Skinner (@cloudskinner)

The formal presentation of research in academia is pretty traditional. I doubt it has changed much in the last 500 years, if not longer, and for a progressive sector of society it really does not look set to change. Basically, you get your results, write them up as a paper, some experts look it over and request more details or changes, you make them, they pass it, and you get published.

The published article then goes into a journal. Most of these are still printed but are also available electronically, usually as a PDF file. This is where the embrace of the modern world ends. I mainly read articles either on my computer or my tablet – most articles are formatted into two columns on a page, which makes them very awkward to read off a screen. So optimisation for electronic presentation is not high on publishers’ agendas, it would seem.

But are we missing out? A magazine I have been reading since I picked up my first copy in October 1993 has changed many times in the last two decades. It isn’t a science publication but is related to a hobby of mine, and last year they started publishing a version of the magazine optimised for the iPad. They could have just bunged out a PDF of the paper copy, but they knew that the new technology provided them with a platform to support more content. In place of a photo there is an interactive 360° image, instead of a price list for new products there are hotlinks direct to their entries on the online store, plus there are additional videos, interviews and zoom panels. If the magazine contains typos or erroneous details, it is automatically updated. The company have started rolling out this idea to their other printed materials.

What if these ideas were used in academia? What sort of content could we include? The most immediate thing that springs to my mind is animations. I produce tonnes of them, and, conference presentations aside, they rarely get seen outside of my research group. Why do I make them? Because they are useful for very clearly showing how systems work, checking whether your model is operating as it should, or demonstrating patterns in data. (*Thanks to @volcanologist for pointing out that animations can sometimes be submitted, and hosted on a publisher's website.)

Take for example some work I have been doing on historic bathymetry data from the Humber estuary. Bathymetry data are readings of water depth at the same tide level, and I use the data to create maps that show the shape and elevation (heights) of the bottom of the estuary. To find out more about what estuaries are, take a look at Sally's previous blog.

Provided by ABPMer, the data spans a period between 1851 and 2003 – I processed the data, calculated rates of elevation change between each sampling period, and from this produced yearly elevation maps. By putting these together as an animation I could see the evolution of the data (it is important here to stress the difference between ‘data’ and reality - not all areas of the estuary were sampled by each survey, and the number and locations of readings varied. Much of the change seen in the video is because of this and not because the Humber has actually, physically, changed in that way).
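For anyone wanting to try something similar, here is a minimal sketch of how a stack of yearly elevation grids can be strung together into an animation with matplotlib; the random surfaces are placeholders standing in for the real gridded bathymetry.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

years = np.arange(1851, 2004, 10)                    # one frame per (interpolated) survey year
grids = [np.random.rand(50, 120) for _ in years]     # placeholder elevation maps (m)

fig, ax = plt.subplots()
image = ax.imshow(grids[0], cmap="viridis", vmin=0, vmax=1)
title = ax.set_title(f"Estuary bed elevation, {years[0]}")
fig.colorbar(image, ax=ax, label="Elevation (m)")

def update(frame):
    # Swap in the next year's surface and update the title
    image.set_data(grids[frame])
    title.set_text(f"Estuary bed elevation, {years[frame]}")
    return image, title

anim = FuncAnimation(fig, update, frames=len(years), interval=200)
anim.save("estuary_elevation.gif", writer="pillow")  # or plt.show() to view interactively
```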



What immediately struck me was the contrast between the middle and the inner estuary. The middle estuary is the part between the Humber Bridge and the sea, where the estuary’s course deviates southwards – it is remarkably stable over the 150 or so years. The inner estuary, from the Bridge towards Goole, sees lots of internal changes – driven by interactions between the river inputs and the tides – but overall very little change. The Mouth of the Humber, the part closest to the sea, appears to see little overall change, but most of the variations seen in the animation are due to differences in the sampling points in the data, and not actual changes. Similarly, changes around the banks of the estuary observed in the animation are most likely caused by sampling differences between the surveys, rather than actual elevation changes.

I have recently been continuing work on adapting a landscape evolution model, Caesar-Lisflood, to model the Humber estuary, and a big step towards this is to accurately model the tides as they are observed by tidal stations recording water depths. Numerically we can do this, but it is important to check that the model is representing the tides in a realistic way - a model has to be able to accurately simulate observed behaviours before you can experiment with it. Again, animations are a really useful tool for doing this.



The video above shows the variations of water depth throughout several tidal cycles, as modelled, with light blues as shallow and dark purples as deep water. The model changes the depth of the water at the right-hand edge in line with water depth data recorded at the Spurn Point tidal station nearby. The water then 'flows' from there, down the length of the estuary as the depth increases, and vice versa - this simulates the tides going in and out.

From this I can tell that the model is operating well, as the tide is advancing (coming in/going up/getting deeper) and receding (going out/down/getting shallower) as expected, throughout the whole region and not just at the points where the tidal stations are located. You'll notice that the early part of the animation shows the estuary filling up with water - this is part of something called 'spin-up', where you let the model run for a period of time to get the conditions right before you start the modelling proper. In this case it is a 'day' as the water level gradually builds, filling the estuary.
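As a generic illustration of the kind of boundary forcing described above (not Caesar-Lisflood's actual input format, and not the real Spurn Point record), a simple tidal water-level series can be synthesised from a couple of tidal constituents and written out to drive the seaward edge of a model.

```python
import numpy as np

def tidal_stage(hours, m2_amp=2.5, s2_amp=0.8, mean_level=3.5):
    """Synthesise a simple tidal water-level series (m) from two constituents.

    M2 (principal lunar, ~12.42 h period) and S2 (principal solar, 12 h)
    beating together give the familiar spring-neap pattern. The amplitudes
    and mean level here are illustrative, not Spurn Point values.
    """
    t = np.asarray(hours, dtype=float)
    m2 = m2_amp * np.cos(2 * np.pi * t / 12.42)
    s2 = s2_amp * np.cos(2 * np.pi * t / 12.00)
    return mean_level + m2 + s2

# Half-hourly water levels for a fortnight, e.g. to write out as a
# boundary-condition series for the seaward edge of a model grid
hours = np.arange(0, 14 * 24, 0.5)
np.savetxt("tidal_boundary.txt", np.column_stack([hours, tidal_stage(hours)]),
           header="hours  water_level_m")
```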

Another check would be the velocity of the flow as the tide floods and ebbs - this is the speed with which the water is moving (in either direction). The velocity should increase as the tide advances or recedes, but slack water (where the water is hardly moving at all) should be observed at high and low water. If the model is working as expected, the area of slack water should progress from the sea up the estuary towards Goole. From the video below, this is seen to be the case. Light blue shows low flow speeds, and darker purples higher flow speeds. The video shows the same modelling procedure as the previous video.



This type of content is really useful to me as a modeller. It is also really useful for presentations, as I can show a group of people something in a few seconds that would probably take a lot of slides and quite a bit of explaining. If academic publishers were to begin to include enhanced content in peer-reviewed publications, I believe this could advance the communication of research, not only to other researchers but also to the wider public. For now, blogs, like the GEES-ology one here, are the best outlet. I hope you enjoyed the animations!