University of Melbourne
Advanced Materials Report
Keywords: α nucleation, β titanium alloy, strengthening, alloying elements
Speed Dating Tool- Authorea
Computational Astrophysics: N-Body Exercise
Example: Standard vs Advanced? We should change the structure of this lab.
CCD REDUCTION AND ANALYSIS

Background

In this experiment you will analyse data taken at a telescope at Siding Spring Observatory, near Coonabarabran in NSW. You will measure the brightness and colours of the stars in an open cluster, and hence produce your own Hertzsprung-Russell (HR) diagram. Using this information, you can work out the size, luminosity and temperature of the stars in this cluster. In the process, you will learn the techniques of CCD data reduction in professional optical astronomy. You will know more about what's involved in observational astronomy than many professors of theoretical astronomy.

CCDs

Astronomers don't look through telescopes any more. In fact, it is anatomically impossible to look through most large telescopes! Starting about 1880, photographic plates replaced the human eye as the most common astronomical detector. The reasons for this are the same as for using any camera: a permanent record is created, and long exposures mean that objects too faint to see with the eye can be recorded in detail.

Starting about 1975, electronic detectors began replacing the photographic plate. The reason for the change is efficiency. A high-quality astronomical photographic plate will record about 2% of the photons falling on it; the latest research-grade electronic detectors record more than 90%. They have other advantages too: because they generate digital data, you can subtract off background light (a big advantage at a site like central Melbourne) and use all sorts of techniques to sharpen up your image. Most important of all, electronic detectors allow you to quantitatively measure the radiation coming from sources, because they are linear devices: the number you get out of a CCD is directly proportional to the number of incident photons. At their most basic, CCDs are simply photon-counting devices. If you know how many photons an object has emitted, then you know how much energy it has emitted, and you are on your way to understanding what it is and how it works.

CCDs are basically silicon chips with a very special circuit pattern etched on their surfaces. When photons hit the surface they generate electron-hole pairs. The electrons are attracted to a grid of electrodes etched into the chip. Each of these electrodes forms a potential well, which traps and stores all the electrons produced in its part of the chip. So as time goes on, a pattern of charge builds up near these electrodes: the more light that falls on a given part of the chip, the more electrons accumulate near the electrodes there. The pattern of electrons building up on the chip is thus an image of whatever the CCD is looking at.

When the observations are finished, the pattern of electrons on the grid of electrodes has to be measured and stored on a computer. To do this, one end of the chip is connected to a set of wires. Under computer control, the voltages on the electrodes are set oscillating, so that a set of ripples of potential moves across the chip. All the electrons are swept along by the ripples, moving to the edge of the chip where they are 'counted' (the potential they generate is measured) and the result is sent to the computer.

CCDs are now the dominant imaging technology in astronomy, and the same technology is widespread in devices such as video recorders and digital cameras. They are relatively cheap and reliable, and it won't hurt them if you expose them to light accidentally.
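The whole chain described above, photons to trapped electrons to clocked-out counts, can be caricatured in a few lines of Python. The sketch below is purely illustrative: the detector numbers (quantum efficiency, gain, exposure time, photon rate) are assumptions for the example, not the properties of any real chip.

    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative (assumed) detector parameters, not those of any real chip.
    quantum_efficiency = 0.9    # fraction of photons producing a stored electron
    gain = 2.0                  # electrons per digitised count (ADU)
    exposure_time = 30.0        # seconds

    # Photon arrival is a Poisson process; photon_rate is photons/pixel/second.
    photon_rate = np.full((8, 8), 100.0)            # a small uniform "scene"
    photons = rng.poisson(photon_rate * exposure_time)

    # Each photon creates an electron-hole pair with probability ~QE;
    # the electrons accumulate in each pixel's potential well.
    electrons = rng.binomial(photons, quantum_efficiency)

    # Readout (conceptually): ripples of potential clock the charge across
    # the chip, one column at a time, to the edge where it is digitised.
    columns = []
    charge = electrons.astype(float)
    for _ in range(charge.shape[1]):
        columns.append(charge[:, 0] / gain)   # 'count' the edge column
        charge = np.roll(charge, -1, axis=1)  # shift the remaining charge over
        charge[:, -1] = 0.0
    counts = np.column_stack(columns)

    # Linearity: mean counts track the expected photon yield, which is
    # what makes CCD photometry quantitative.
    print(counts.mean(), photon_rate.mean() * exposure_time * quantum_efficiency / gain)

The final line makes the linearity claim explicit: doubling the exposure time (or the photon rate) doubles the mean counts.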
CCDs can also be very efficient: most astronomy-grade CCDs can collect and store up to 99% of the photons that fall on them. With a CCD attached, a little telescope on the roof of the School of Physics is as powerful as the biggest telescope in the world would be using naked-eye observations.

The Hertzsprung-Russell Diagram

Stars, like people, come in many shapes and sizes. Massive blue stars burn furiously and die young (like pop singers). Boring stars like our Sun chug along for billions of years emitting yellow light before they run out of fuel, swell up into cool bloated supergiants and finally splutter to an end as piddling little white dwarfs. Most stars are quite well understood (almost anything looks easy to understand when it is as distant as a typical star!), but people keep finding bizarre stars that break the rules.

In this exercise you will look at a cluster of stars and measure their properties. You will use the available observations to figure out how bright the stars are, how hot, how large and more. In particular, you will plot your own Hertzsprung-Russell diagram, the famous plot of colour against brightness that allows you to classify stars into their myriad different types.

A Brief Summary of What You Should Do

1. Familiarise yourself with the observational data provided on the linux computer and process it using the IRAF (Image Reduction and Analysis Facility) package.
2. Measure the brightness of all the stars in your processed data at several different wavelengths.
3. Plot the HR diagram and work out the properties of your stars.

(A minimal code sketch of these three steps appears at the end of this section.)

The Telescope

The data were taken by Dr Alicia Oshlack (a former Ph.D. student in the astro group) using the 40 inch telescope owned by Mount Stromlo and Siding Spring Observatories. To take observations, astronomers must submit a proposal detailing their proposed observations and giving a scientific justification for the use; this is essentially the same process used for all major astronomical telescopes across the world. This telescope has since been decommissioned and is no longer used for astronomical research.

The Data

If you look at a random bunch of stars, you get a very messy HR diagram. The reason is that the stars are all at different distances: an intrinsically faint red star nearby may appear brighter than a luminous blue star in the background, simply because it is closer. For this reason, we will look at a cluster of stars. This ensures that nearly everything we see is at roughly the same distance (though we might accidentally pick up an occasional star in the foreground or background). All the stars in a cluster form at once, so everything we observe will have the same age. There is a third advantage too: because stars are crowded together in a cluster, we can image lots of stars at once.

Clusters come in two types: open clusters (relatively nearby, containing up to a few hundred stars) and globular clusters (more distant, containing perhaps a million stars). Unfortunately, globular clusters are so distant that with our telescope all their stars blur together. We will therefore observe open clusters. Table [tab:clusters] lists several bright open clusters that can be seen from southern Australia around autumn.
However, we have multi-colour images of only one of these clusters – M93 – and so this is the one you will analyse. Because a CCD ultimately measures accumulated charge, its raw output is usually expressed in units of 'counts'. Typically a count is an integer between 0 and 2¹⁶ − 1 = 65535, because the electronics on the chip use analog-to-digital converters with 16-bit resolution.
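Here is the promised end-to-end sketch of the three steps listed above. The lab itself uses IRAF; this version uses Python with astropy and matplotlib purely for illustration, and every file name, count value and zero point below is an assumed placeholder, not part of the real M93 data set.

    import numpy as np
    import matplotlib.pyplot as plt
    from astropy.io import fits

    # Step 1: basic CCD reduction (bias subtraction and flat-fielding).
    # File names are placeholders; substitute your own frames.
    bias = fits.getdata("bias.fits").astype(float)
    flat = fits.getdata("flat_V.fits").astype(float)
    raw = fits.getdata("m93_V.fits").astype(float)

    flat_corr = flat - bias
    flat_corr /= flat_corr.mean()          # normalise the flat to unit mean
    science_V = (raw - bias) / flat_corr   # calibrated science frame
    fits.writeto("m93_V_reduced.fits", science_V, overwrite=True)

    # Step 2: counts to instrumental magnitudes.
    def instrumental_magnitude(counts, exposure_time, zero_point=25.0):
        # Standard conversion; the zero point of 25.0 is an assumed
        # placeholder, normally calibrated from standard-star observations.
        return -2.5 * np.log10(counts / exposure_time) + zero_point

    # Made-up sky-subtracted counts for a handful of stars in two filters.
    counts_B = np.array([12_000.0, 48_000.0, 5_500.0, 30_000.0])
    counts_V = np.array([15_000.0, 40_000.0, 9_000.0, 26_000.0])
    B = instrumental_magnitude(counts_B, exposure_time=30.0)
    V = instrumental_magnitude(counts_V, exposure_time=30.0)

    # Step 3: the HR (colour-magnitude) diagram, V against B - V, with the
    # magnitude axis inverted so that bright stars sit at the top.
    plt.scatter(B - V, V)
    plt.gca().invert_yaxis()
    plt.xlabel("B - V (colour)")
    plt.ylabel("V (instrumental magnitude)")
    plt.show()

In the real exercise each of these steps is performed with the corresponding IRAF tasks, and the star-by-star counts come from aperture photometry of the reduced frames rather than being typed in by hand.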
Ozone Paper Moved Offline
We develop a quantitative method for identifying Stratosphere-to-Troposphere Transport (STT) events, and a minimum bound on the quantity of ozone transported, using ozonesondes launched over Melbourne, Macquarie Island, and Davis.
The Victorian Earthquake Hazard Map
SUMMARY

This report summarises the development of a new Probabilistic Seismic Hazard Analysis (PSHA) for Victoria, called the Victorian Earthquake Hazard Map (VEHM). PSHA forecasts the strength of ground shaking expected within a given time interval (the return period; a short illustrative calculation appears at the end of this summary). The primary inputs are historical seismicity catalogues, paleoseismic (active fault) data, and ground-motion prediction equations.

A key component in the development of the VEHM was the integration of new geophysical data, derived from deployments of Australian Geophysical Observing System seismometers in Victoria, with a variety of publicly available datasets including seismicity catalogues, geophysical imagery and geological mapping. This has resulted in a new dataset that constrains the models presented in the VEHM and is also provided as a stand-alone resource for both reference and future analysis.

The VEHM provides a Victorian-focussed earthquake hazard estimation tool that offers an alternative to the nationally focussed 2012 Australian Earthquake Hazard Map. The major difference between the two maps is the inclusion of active fault locations and slip estimates in the VEHM. Even without including fault-related seismicity, there is a significant difference in hazard estimation between the two maps, due primarily to differences in the analysis of seismicity. These issues are described in the discussion section of this report; they give the VEHM a higher-fidelity result and make it a more conservative hazard model.

The VEHM currently exists as a series of online resources to assist those working in engineering, planning and disaster management. It is a dynamic dataset: the inputs will continue to be refined as new constraints are included and as the map is made compatible with the Global Earthquake Model (GEM) software, due for release in late 2014.

The VEHM was funded through the Natural Disaster Resilience Grants Scheme (NDRGS), a grant program funded by the Commonwealth Attorney-General's Department under the National Partnership Agreement on Natural Disaster Resilience signed by the Prime Minister and Premier. The purpose of the National Partnership Agreement is to contribute towards implementation of the National Strategy for Disaster Resilience, supporting projects leading to the following outcomes:

1. reduced risk from the impact of disasters; and
2. appropriate emergency management, including volunteer capability and capacity, consistent with the State's risk profile.
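As flagged above, "return period" is shorthand for an annual exceedance rate. Under the usual Poisson assumption the two are related as in this small Python sketch; the numbers are illustrative examples, not values taken from the VEHM.

    import math

    def exceedance_probability(return_period_years, window_years):
        # Poisson model: P(at least one exceedance in the window)
        # = 1 - exp(-window / return_period).
        return 1.0 - math.exp(-window_years / return_period_years)

    # Example: shaking with a 500-year return period has roughly a 10%
    # chance of being exceeded during a 50-year building design life.
    print(exceedance_probability(500.0, 50.0))   # ~0.095

The VEHM's actual hazard numbers come from the full PSHA machinery described above; this snippet only unpacks the vocabulary.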
HealthHack: the musical
Imagine an event generating multiple innovations in medical research in one weekend. Imagine an event giving spreadsheet-limited laboratory researchers a taste of what computers can really do. Imagine an event where the powers of coding and analysis are combined with the powers of biology, genetics and medicine to convert dry or unobtainable data into ground-breaking medical insight. And imagine it all done in a spirit of frenzied teamwork and friendly competition.
This is HealthHack, a 48-hour hackfest in which medical researchers nucleate teams of brilliant, creative people to solve the data-processing problems faced by real researchers and real clinicians.
The weekend opens on Friday evening with the problem owners pitching their ideas, the problem and a proposed solution, to the assembled hackers, followed by an hour in which hackers can approach problem owners with questions and seek additional details. Hackers then choose which problem they will work on, and the assembled teams plan their assault and commence work on their application.
Apart from going home (presumably) to sleep, the event runs all weekend. Sunday 4pm is down tools, and the teams then present their prototypes to everyone, with prizes for first place, second place, and spirit of HealthHack.
Below is a sampling of the teams and their creations: