Getting Results: BYU-RET Week 4

M67 through a narrow-band hydrogen-alpha filter from my .fits files

With my ability to use the IRAF and DAOphot software improving, I am finally ready to analyze the data I’m getting out. This was my fourth week at BYU; almost halfway done and just getting to where I feel competent enough to do some decent analysis. The learning curve has been steep.

The Learning Curve of Science:

There is a lengthy learning curve for those who wish to become scientists, and it is this long apprenticeship that discourages too many promising, bright students from entering the profession. Usually one has to earn an undergraduate degree that involves taking hard classes (differential calculus and quantum mechanics, for example) and having precious little time for anything else. Then there are the master's and doctoral degrees, in which the prospective scientist becomes something like a journeyman – able to do her or his own work under supervision, until granted a PhD, which is the license to practice science. Most scientific specialties require additional experience, so many PhDs go on to post-doc research before finally achieving the level of independence and expertise that will command the respect of their peers.

Adding it all up, that’s about 8-10 years of post-high school education and training. Who wouldn’t be discouraged? It takes a single-minded dedication and commitment that’s hard to maintain (and hard to afford). I have thought about getting a PhD myself, but it seems pretty daunting. I’d have to retake some college classes (especially calculus) and my brain is not as supple as it used to be. I have a family to support, which would be hard to do as a teaching assistant or with a graduate student fellowship. But I also want to do official research on my ideas about science education, and no one will take me seriously until I add a few more titles after my name, no matter how many blog posts I write.

We live in a society that is totally dependent on technology (if you don’t believe this, try living without any electronic technology for a day and see how easy it is. And by the way, that includes driving your car, since cars have computers running the fuel injection system). A vast majority of the population uses technology without understanding how it works or how it is made. They couldn’t recreate it if their lives depended on it. So we live in a technocracy; that is, those who understand and control the technology are those with the real power. Just look at how quickly Congress caved when they tried to pass SOPA in 2012. All it took was Google and Wikipedia protesting for one day, and Congress completely backed off.

To get a .txt file into Excel, you must tell it which row the data starts on, in this case Row 45.

So here is the crux of the problem: we desperately need more scientists and engineers, but the long process required to train them is unappealing to most high school students. It’s not that they’re not bright enough. They simply don’t see that the rewards are worth the cost. As teachers we aren’t doing a good enough job showing them how profoundly rewarding a life in science can be. Perhaps if science teachers were themselves scientists, they might pass on the excitement of discovery. Better yet, if students could participate in real science as early as high school or even middle school, they might catch the vision of what they could become. That is the purpose of the Research Experiences for Undergraduates (and Teachers) program that I’m part of here at BYU this summer: so that I can tell my students it’s all worth it, and that if I can do it, so can they.

This week it all began to pay off for me as I saw my work yielding results. But before I got to that point, I had to overcome one more hurdle.

The Header of the original .als file, which has 45 rows before the actual data starts.

Getting the Data into Excel:

The end result of the lengthy DAOphot procedure was a list of stars, their X and Y coordinates in the .fits file, and their magnitudes adjusted for the seeing conditions and corrected for saturated or overlapping stars. It came out as an .als file. Somehow, in order to compare the results, I had to get it into a spreadsheet.

The second step to get .txt files into Excel is to set the column breaks with tab markers.

Microsoft Excel can bring in text files as data if the numbers are separated by commas, spaces, or tabs. First, I double-clicked on the .als file, which opened it up in MS Word. I re-saved it as a .txt file from Word, then opened Excel and chose “File-Open” from within the program. Excel then walked me through the conversion process. I had to tell it what row the data began on (most files have headers or column labels); in this case, the actual data begins on Row 45. Then I had to set tab markers for the breaks between the data, making sure to leave enough room so that all the numbers for each field would fit inside the tabs (for example, the star numbers started in single digits but by the end of the file were in the hundreds, so I had to leave room for at least three digits in that column). Once the tabs were in the right places, the data imported into a raw Excel spreadsheet. But it still needed quite a bit of cleaning up.
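
For anyone who would rather skip the Word and Excel steps, here is a minimal sketch of the same fixed-width import using Python’s pandas library. The filename is a placeholder, the 44 skipped rows come from my file’s header, and I am assuming pandas can infer the column breaks the same way I set the tab stops by hand:

    import pandas as pd

    # Read the text version of the .als file. The header occupies rows 1-44,
    # so the actual data (which begins on row 45) lands in the table.
    # "m67_halpha.txt" is a placeholder name for the re-saved .als file.
    raw = pd.read_fwf("m67_halpha.txt", skiprows=44, header=None)
    print(raw.head())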

What the raw data looks like once it is in Excel. I had to delete the right two columns, then sort the data by Star Number and delete the interlaced rows.

Cleaning Up the Data:

In the case of .als files, the data came in with about ten fields per record, too many to fit on one line, so each star record wrapped around to a second line. This had the effect of making two rows for each record, but I only needed the first row. Fortunately, the second row started with a blank cell in each case, so it was a simple matter of selecting all the data, sorting it by the first column (star number), and then deleting all the second rows, which were now at the bottom of the file. I also deleted two columns of data at the right side that I didn’t need. This left the following fields: Star Number, X-position, Y-position, Magnitude, and Error.
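
The same cleanup can be scripted. Here is a hedged pandas sketch that mirrors the steps above, continuing from the “raw” table in the earlier sketch; the assumption that the first five columns are star number, X, Y, magnitude, and error is mine, based on the layout I just described:

    # Continuation rows begin with a blank first cell, so drop any row whose
    # first column is empty, then keep only the five fields described above.
    stars = raw[raw[0].notna()].iloc[:, :5].copy()
    stars.columns = ["star", "x", "y", "mag", "err"]
    stars["star"] = stars["star"].astype(int)
    stars = stars.sort_values("star").reset_index(drop=True)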

Some stars are too faint to process, so gaps are left in the number sequence. To accurately compare the same stars across filters, the star number must be lined up and the gaps filled with blank rows.

One final problem had to be fixed: the process of doing photometry with DAOphot identifies a list of stars, but some are too faint or too close to the edge of the frame for accurate results and are rejected from the final calculations. These are saved out as a separate “reject” file. In my spreadsheet, they showed up as gaps in the star numbers. Since I would be comparing the same stars through different filters and at different times of the night, I had to be able to match each star with its counterpart in every field, and that meant filling in the gaps. I scrolled down, looking for discontinuities in the numbering by comparing the spreadsheet row number with the star number. When I found a gap, I inserted a new row and filled in the missing number.
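
If you prefer to script the gap filling, a reindex over the full range of star numbers inserts the blank rows automatically. This is only a sketch and assumes the “stars” table from the previous sketch:

    # Re-index on star number so every number from the first to the last is
    # present; rejected stars simply become blank (NaN) rows, which keeps
    # each star on the same row in every filter's table.
    full_range = range(int(stars["star"].min()), int(stars["star"].max()) + 1)
    aligned = stars.set_index("star").reindex(full_range)
    aligned.index.name = "star"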

First Results: Magnitude Versus Error

The first frames I used DAOphot on were four frames of M67 taken on April 1, 2012. I chose this folder because it was the first one on my data drive, but it wound up being a good choice because M67 is a well-studied open cluster that is quite old, about four billion years. I processed two frames taken with a narrow-band hydrogen-alpha filter and two frames taken with a wide-band hydrogen-alpha filter.

M67 Magnitudes vs. Error for three fields using a narrow-band H-alpha filter. Low magnitude stars (brighter) are saturated. High magnitude stars are too dim for accurate measurement. Middle magnitude stars with high errors could be something else entirely . . .

When I consulted with Dr. Hintz, he suggested I check how good my data was by comparing the star magnitudes with the errors. This would show me at what magnitude the errors became too great, where the stars were too dim to measure accurately. I sorted the data by magnitude, then created a chart comparing the magnitudes with the errors. The result was the chart shown here. I also did the same comparison with the other frames from that night.
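
The chart itself is easy to reproduce in code. Here is a rough matplotlib sketch of the magnitude-versus-error plot, again assuming the “aligned” table from the sketches above; the axis labels and title are mine, not DAOphot’s:

    import matplotlib.pyplot as plt

    # Sort by magnitude and plot magnitude against photometric error.
    plotted = aligned.dropna(subset=["mag", "err"]).sort_values("mag")
    plt.scatter(plotted["mag"], plotted["err"], s=8)
    plt.xlabel("Instrumental magnitude")
    plt.ylabel("Magnitude error")
    plt.title("M67, narrow-band H-alpha: magnitude vs. error")
    plt.show()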

There was an interesting pattern to the data: the very lowest magnitude stars (the brightest ones) had fairly high errors, probably because they were saturated or covered too many pixels for the point spread function to measure their magnitudes accurately. But once the first five or six brightest stars were charted, the rest fell into a nice curve that rose gradually for about seven magnitudes before curving more steeply upward and becoming jumbled at the higher magnitudes, where the stars were too dim for accurate measurement.

A Detour into Variables:

Not all of the stars fit on this nice curve, however. Some stars had consistently high errors in all fields, which I have shown with the circled dots in the chart. I thought there might be something interesting about these stars, that they might be variable stars, for example, and decided to pursue this further by identifying which stars they were in the .fits file and comparing them with known variables in M67.

I mapped the locations of the stars from the previous chart that had high errors and compared them to known variable stars in M67. There was no correspondence. I probably only discovered some bad pixels in the CCD sensor.

This detour took me a couple of days to work through. I figured out which stars they were from the spreadsheet (they had the highest errors), then used the X and Y coordinates to determine the exact pixel locations in the .fits file, which I had loaded into Adobe Photoshop. I made marks at those locations, drew circles around them, and labeled them with the star numbers from the .als file.

I then looked up M67 in SIMBAD, the online astronomical database, and found its list of stars. From their names, I found which ones were variable (V*xxx), then marked them with red circles and names in my evolving Photoshop file. There was no correspondence between the two sets of circles, although some of the yellow circles did enclose actual stars. My conclusion, after this little detour, was that I had actually discovered some bad pixels in the CCD sensor. Perhaps, time permitting, I will look at the two stars I did identify and compare their magnitudes over several days to see if they are actually variables. Or this might be a good project for one of my students this fall.
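
The same cross-check could also be done numerically instead of visually in Photoshop. Here is a hypothetical sketch of a nearest-neighbor comparison, assuming both the high-error detections and the known variables have already been put into the same coordinate system (pixel X and Y here); the positions and the 3-pixel matching radius are made-up placeholders:

    import numpy as np

    # Placeholder (x, y) positions; real values would come from the .als file
    # and from SIMBAD positions converted into pixel coordinates.
    high_err_xy = np.array([[512.3, 498.7], [120.4, 844.1]])
    variable_xy = np.array([[300.0, 300.0], [700.0, 250.0]])

    tolerance = 3.0  # assumed matching radius in pixels
    for x, y in high_err_xy:
        dist = np.hypot(variable_xy[:, 0] - x, variable_xy[:, 1] - y)
        status = "matches a known variable" if dist.min() <= tolerance else "no match"
        print(f"star at ({x:.1f}, {y:.1f}): {status}")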

Even though the results were largely negative, at least my Magnitude vs. Error chart did conform to what Dr. Hintz had drawn for me as the likely shape of the curve. This tells me that my photometry measurements are good and I am finally getting some results after over three weeks of preparation. I can now start to ask questions and pull the answers out of the data.
