Last week I visited the Department of Energy’s Oak Ridge National Laboratory, where I had the chance to see the Titan supercomputer.
ORNL’s computational expertise is built on a foundation of computer science, mathematics, and “big data”—or data science. The projects we undertake run the gamut from basic to applied research, and our ability to efficiently apply the massive computing power available at ORNL across a range of scientific disciplines sets us apart from other computing centers. We have decades of experience in developing applications to support basic science research in areas ranging from chemistry and materials science to fission and fusion, and we apply that expertise to solving problems in a number of other areas.
For example, our experience developing materials science applications has allowed us to build a “virtual” nuclear reactor that our scientists and industrial partners use to optimize current and future reactor designs. Similarly, our computational capabilities enable us to create highly detailed climate models and interactive simulations designed to improve the reliability and efficiency of the nation’s electric grid and transportation infrastructure.
ORNL has a 60-year history of computing stretching from Titan, currently one of the world’s most powerful supercomputers, back to ORACLE (Oak Ridge Automatic Computer and Logical Engine), the fastest computer in the world in 1954. Our experience in providing computational expertise and facilities to the U.S. Department of Energy has given us the in-depth understanding of computer science needed to wring the greatest scientific benefit from every dollar invested in these big machines.
Making the most of these world-class supercomputers requires a dedicated team of computer scientists, mathematicians, and computational scientists. Having a talented, interdisciplinary staff, and the resources to support them, also appeals to potential collaborators and prospective employees seeking broad opportunities for their interests and abilities.
I found this on Pandora a few days ago. I was flying from New York to Knoxville - it was just sort of the right song at the right time.
I’m getting ready to start installing RHEL7 at home. One of the biggest changes is that the infrastructure that used to maintain services (daemon processes) is now rolled into an architecture called “systemd”. Here is a cheat sheet to help deal with the changeover:
SystemD Cheat Sheet
systemctl Cheat Sheet
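For quick reference, here are a few of the translations I find myself reaching for most often, using sshd purely as an example unit (the old SysV commands are shown in the comments):

```shell
systemctl status sshd.service     # was: service sshd status
systemctl start sshd.service      # was: service sshd start
systemctl stop sshd.service       # was: service sshd stop
systemctl restart sshd.service    # was: service sshd restart
systemctl enable sshd.service     # was: chkconfig sshd on
systemctl disable sshd.service    # was: chkconfig sshd off

# List all installed service units and whether they're enabled:
systemctl list-unit-files --type=service   # was: chkconfig --list
```

The `.service` suffix is optional when the unit name is unambiguous, but I find spelling it out helps while getting used to the new tooling.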
I thought I’d add this article on systemd
“The battle over systemd exposes a fundamental gap between the old Unix guard and a new guard of Linux developers and admins”
I’m probably considered an “Old Guard UNIX” guy, and I’m not crazy about the fundamental design of systemd, but I’m smart enough to realize that things change, and if you don’t change with them, you become irrelevant.
DevOps is the blending of tasks performed by a company’s application development and systems operations teams.
The term DevOps is used in several ways. In its broadest meaning, DevOps is a philosophy or cultural approach that promotes better communication between the two teams as more elements of operations become programmable. In its narrowest interpretation, DevOps is a job description for an employee who possesses the skills to work as both a developer and a systems engineer. In some industries, the term is also used to describe a moderator between the two groups who functions as a type of scrum master, helping developers and operations teams keep application lifecycle management (ALM) top-of-mind.
Back on July 27th I blogged about beginning to learn the Python language for software development. Here is an update. I’m nearly complete with my Python course on Udemy.com and I’ve started actual work with Python.
So, here’s where I’m at.
I got close to completing the online course, but decided to really dive in and start working with the language, leaving the rest of the course for later. I have every intention of finishing it, but I wanted a little hands-on experience first so I’d know which sections I really want to concentrate on. Like most software developers, I tend to reuse code all the time. As a long-time Perl programmer, I always had my standard “go-to” functions and libraries that I’d either import or cut and paste pieces of into new code. So, to get into the swing of things, I’ve now written functions to update a MySQL database, send email alerts, parse the output of UNIX commands, handle CGI forms, write to syslog, find network errors/dropped packets, automate ssh connections, send pings, etc.
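To give a flavor of the kind of “go-to” helper I mean, here’s a minimal sketch of a function that parses classic `netstat -i` output and pulls the error and dropped-packet counters per interface. The sample output and its numbers are made up for illustration, and the column positions assume the traditional Iface/MTU/Met layout:

```python
def parse_netstat_errors(netstat_output):
    """Parse `netstat -i` style output into per-interface error counts.

    Assumes the classic column layout:
    Iface  MTU  Met  RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
    """
    stats = {}
    for line in netstat_output.splitlines():
        fields = line.split()
        # Skip the title and header rows: data rows have a numeric MTU column
        if len(fields) < 12 or not fields[1].isdigit():
            continue
        iface = fields[0]
        stats[iface] = {
            "rx_errors": int(fields[4]),
            "rx_dropped": int(fields[5]),
            "tx_errors": int(fields[8]),
            "tx_dropped": int(fields[9]),
        }
    return stats

# Sample `netstat -i` output (hypothetical numbers):
sample = """\
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0   1500   0  981342      0     12      0   734501      3      0      0 BMRU
lo    65536   0    4821      0      0      0     4821      0      0      0 LRU
"""

stats = parse_netstat_errors(sample)
print(stats["eth0"]["rx_dropped"])  # 12
```

In real use I feed it the output of the command via `subprocess.check_output(["netstat", "-i"], text=True)` instead of a canned string.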
I’m a long way from becoming an expert, and it’s not nearly as second nature as some of the other languages I know, but I’m happy that I’m getting used to the syntax and that I’m starting to do some useful things with it.
Counting down to [SysAdmin Day](http://sysadminday.com/about-sysadmin-day/when-is-sysadmin-day/)