Supersized Science

Summary: The Supersized Science podcast highlights research and discoveries nationwide that are enabled by the advanced computing technology and expertise of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. TACC science writer Jorge Salazar hosts Supersized Science. The podcast is part of the Texas Podcast Network, brought to you by The University of Texas at Austin. Podcasts on the network are produced by faculty members and staff at UT Austin who work with University Communications to craft content that adheres to journalistic best practices. The University of Texas at Austin offers these podcasts at no charge. Podcasts appearing on the network and this webpage represent the views of the hosts, not of The University of Texas at Austin.

  • Artist: Texas Advanced Computing Center - University of Texas at Austin
  • Copyright: CC BY-NC-SA

Podcasts:

 Podcast - UT President Greg Fenves on Stampede2 Supercomputer | File Type: audio/mpeg | Duration: 3:31

On July 28, 2017, the Texas Advanced Computing Center (TACC) at The University of Texas at Austin dedicated a new supercomputer called Stampede2. Funded by a $30 million award to TACC from the National Science Foundation, Stampede2 is the most powerful supercomputer at any academic institution in the U.S. During its four-year lifecycle, Stampede2 will be used for scientific research and will serve as a strategic national resource that provides high-performance computing capabilities to the open science community. TACC podcast host Jorge Salazar interviewed Greg Fenves, President of UT Austin, to discuss Stampede2 and the importance of supercomputers to the university. Greg Fenves: Stampede2 is a fabulous technology. But technology ultimately comes from people's ideas. And what we've been able to do at the University of Texas and with the Texas Advanced Computing Center is bring some of the smartest people to work with our partners, Dell and Intel, to create fabulous new technology that can then be deployed, and is now being deployed, to take an unprecedented look at these tough challenges that we face as a society.

 Code @ TACC Wearables Summer Camp | File Type: audio/mpeg | Duration: 5:24

Our technology is becoming more personal and wearable. Everything from fitness trackers to sleep trackers to heart-rate headphones aims to keep vital information about us at our fingertips. In June 2017, TACC hosted a summer camp, Code @ TACC Wearables, for high school students to learn how to make and program their own custom wearable technology. The camp guided 27 high school students from the Austin area in fashioning wearable circuits that responded to stimuli such as light and temperature and were connected to the Internet of Things. Podcast host Jorge Salazar interviews Joonyee Chuah, Outreach Coordinator at the Texas Advanced Computing Center.
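The episode does not describe the campers' actual hardware or code. As a rough, hypothetical illustration of the kind of behavior described, here is a minimal Arduino-style C++ sketch, assuming an Arduino-compatible wearable board with a photoresistor on analog pin A0, an analog temperature sensor on A1, and an LED on digital pin 3; all pin numbers and thresholds are invented for this example.

```cpp
// Hypothetical sketch, not the camp's actual code. Assumes an Arduino-compatible
// wearable board with a photoresistor on A0, an analog temperature sensor on A1,
// and an LED on digital pin 3. Pin numbers and thresholds are illustrative only.
const int LIGHT_PIN = A0;
const int TEMP_PIN  = A1;
const int LED_PIN   = 3;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);          // report readings over serial; an IoT-connected
                               // board could publish these values instead
}

void loop() {
  int light = analogRead(LIGHT_PIN);   // raw 0-1023 reading
  int temp  = analogRead(TEMP_PIN);    // raw reading; conversion depends on the sensor

  // Respond to the environment: light the LED when it gets dark.
  digitalWrite(LED_PIN, light < 300 ? HIGH : LOW);

  // Stream readings so they could be forwarded to an Internet of Things service.
  Serial.print("light=");
  Serial.print(light);
  Serial.print(" temp_raw=");
  Serial.println(temp);

  delay(1000);                 // sample once per second
}
```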

 A Retrospective Look at the Stampede Supercomputer - Science Highlights | File Type: audio/mpeg | Duration: 20:50

Welcome to a retrospective look at a few of the science highlights of the Stampede supercomputer, one of the most powerful supercomputers in the U.S. for open science research from 2013 to 2017. Funded by the National Science Foundation and hosted by The University of Texas at Austin, the Stampede system at the Texas Advanced Computing Center achieved nearly 10 quadrillion operations per second. Podcast host Jorge Salazar interviews Peter Couvares, staff scientist at LIGO; University of California Santa Barbara physicist Robert Sugar; and Ming Xue, Professor in the School of Meteorology at the University of Oklahoma and Director of the Center for Analysis and Prediction of Storms. Stampede helped researchers make discoveries across the full spectrum of science, including insight into diseases like cancer and Alzheimer's; the insides of stars and the signals of gravitational waves; natural disaster prediction of hurricanes, earthquakes, and tornadoes; and more efficient engineering in projects such as designing better rockets and quieter airplanes. Through nearly all of its service, Stampede ranked among the 10 most powerful computers in the world, and it was the flagship system of the National Science Foundation's Office of Advanced Cyberinfrastructure, which provides academic researchers access to technologies and expertise that drive U.S. innovation and open new frontiers for discovery.

 A Retrospective Look at the Stampede Supercomputer - The Technology | File Type: audio/mpeg | Duration: 17:50

In 2017, the Stampede supercomputer, funded by the National Science Foundation, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off soon-to-be-decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing. Podcast host Jorge Salazar interviews Dan Stanzione, Executive Director of the Texas Advanced Computing Center; Bill Barth, Director of High Performance Computing and a Research Scientist at the Texas Advanced Computing Center; and Tommy Minyard, Director of Advanced Computing Systems at the Texas Advanced Computing Center.

 Code @TACC Robotics Camp delivers on self-driving cars | File Type: audio/mpeg | Duration: 8:24

From June 11 through 16, 2017, TACC hosted a week-long summer camp called Code @TACC Robotics, funded by the Summer STEM Funders Organization under the supervision of the KDK Harmon Foundation. Thirty-four students attended. Five staff scientists at TACC and two guest high school teachers from Dallas and Del Valle also instructed the students. The students divided themselves into teams, each with specific roles: principal investigator, validation engineer, software developer, and roboticist. They assembled a robotic car from a kit and learned how to program its firmware. The robotic cars had sensors that measured the distance to objects in front of them, and they could be programmed to respond to that information by stopping, turning, or even relaying the information to a nearby car. Teams were assigned a final project based on a real-world problem, such as what action to take when cars arrive together at a four-way stop. Podcast host Jorge Salazar interviews Joonyee Chuah, Outreach Coordinator at the Texas Advanced Computing Center; and Katrina Van Houten, a teacher at Del Valle High School.
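The camp's actual firmware is not shown in the episode. As a hedged sketch of the behavior described, stopping or turning when an obstacle appears ahead, here is a minimal Arduino-style C++ program that assumes an Arduino-compatible car kit with an HC-SR04-style ultrasonic distance sensor and a simple two-channel motor driver; the pin assignments, speeds, and 20 cm threshold are invented for illustration.

```cpp
// Hypothetical sketch, not the camp's firmware. Assumes an Arduino-compatible
// robot car with an HC-SR04-style ultrasonic sensor (TRIG on pin 9, ECHO on
// pin 10) and a motor driver whose left/right speed pins are 5 and 6.
// All pin numbers, speeds, and thresholds are illustrative.
const int TRIG_PIN    = 9;
const int ECHO_PIN    = 10;
const int LEFT_MOTOR  = 5;
const int RIGHT_MOTOR = 6;

long readDistanceCm() {
  // Send a 10-microsecond trigger pulse, then time the returning echo.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  unsigned long echoMicros = pulseIn(ECHO_PIN, HIGH, 30000);  // 30 ms timeout
  return echoMicros / 58;       // approximate conversion to centimeters
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 20) {
    // Obstacle ahead: stop, then pivot by driving only one side.
    analogWrite(LEFT_MOTOR, 0);
    analogWrite(RIGHT_MOTOR, 0);
    delay(300);
    analogWrite(LEFT_MOTOR, 150);
    delay(400);
  } else {
    // Clear path (or sensor timeout): drive both motors forward at moderate speed.
    analogWrite(LEFT_MOTOR, 150);
    analogWrite(RIGHT_MOTOR, 150);
  }
  delay(50);
}
```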

 Reaching for the Stormy Cloud with Chameleon | File Type: audio/mpeg | Duration: 12:33

Podcast host Jorge Salazar interviews Xian-He Sun, Distinguished Professor of Computer Science at the Illinois Institute of Technology. What if scientists could realize their dreams with big data? On the one hand, you have parallel file systems for number crunching. On the other, you have Hadoop file systems, made for cloud computing with data analytics. The problem is that one doesn't know what the other is doing: you have to copy files from the parallel file system to Hadoop, and doing that is so slow it can turn a supercomputer into a super slow computer. In 2015, computer scientists developed a way for parallel and Hadoop file systems to talk to each other. It's a cross-platform Hadoop reader called PortHadoop, short for portable Hadoop. The scientists have since improved it, and it's now called PortHadoop-R. It's good enough to start work with real data in the NASA Cloud library project. The data are used for real-time forecasts of hurricanes and other natural disasters, as well as for long-term climate prediction. A supercomputer at TACC helped the researchers develop PortHadoop-R. The system is called Chameleon, a cloud testbed funded by the National Science Foundation. Chameleon is a large-scale, reconfigurable environment for cloud computing research co-located at the Texas Advanced Computing Center and the University of Chicago. Chameleon gives researchers 'bare-metal access,' the ability to change and adapt the supercomputer's hardware and customize it to improve reliability, security, and performance. Sun's PortHadoop research was funded by the National Science Foundation and the NASA Advanced Information Systems Technology (AIST) Program. Feature Story: www.tacc.utexas.edu/-/reaching-for-…-with-chameleon Music Credits: Raro Bueno, Chuzausen freemusicarchive.org/music/Chuzausen/
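PortHadoop-R's source code is not part of this episode. As a loose illustration of the general idea, reading HDFS-resident data directly from an HPC-side program rather than first copying files onto the parallel file system, here is a minimal C++ sketch that uses Apache Hadoop's libhdfs C API; the file path and buffer size are placeholders, and this is not PortHadoop-R's actual implementation.

```cpp
// Minimal sketch (not PortHadoop-R): read a file directly from HDFS via the
// libhdfs C API instead of first staging a copy onto the parallel file system.
// "default" picks up the namenode from the local Hadoop configuration;
// the path below is a made-up placeholder.
#include <hdfs.h>      // Apache Hadoop libhdfs
#include <fcntl.h>     // O_RDONLY
#include <cstdio>
#include <vector>

int main() {
    hdfsFS fs = hdfsConnect("default", 0);          // connect to HDFS
    if (!fs) { std::fprintf(stderr, "cannot connect to HDFS\n"); return 1; }

    const char* path = "/data/sample.dat";          // placeholder path
    hdfsFile in = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
    if (!in) { std::fprintf(stderr, "cannot open %s\n", path); hdfsDisconnect(fs); return 1; }

    std::vector<char> buf(1 << 20);                 // 1 MiB read buffer
    long long total = 0;
    tSize n;
    while ((n = hdfsRead(fs, in, buf.data(), static_cast<tSize>(buf.size()))) > 0) {
        total += n;                                 // hand the data to the HPC-side
                                                    // analysis here; no staging copy
    }
    std::printf("read %lld bytes directly from HDFS\n", total);

    hdfsCloseFile(fs, in);
    hdfsDisconnect(fs);
    return 0;
}
```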

 When Data's Deep, Dark Places Need to be Illuminated | File Type: audio/mpeg | Duration: 23:42

The World Wide Web is like an iceberg, with most of its data hidden below the surface. There lies the 'deep web,' estimated at 500 times bigger than the 'surface web' that most people see through search engines like Google. An innovative data-intensive supercomputer at TACC called Wrangler is helping researchers get meaningful answers from the hidden data of the public web. Wrangler uses 600 terabytes of flash storage that speedily reads and writes files. This lets it fly past bottlenecks with big data that can slow down even the fastest computers. Podcast host Jorge Salazar interviews graduate student Karanjeet Singh; and Chris Mattmann, Chief Architect in the Instrument and Science Data Systems Section of NASA's Jet Propulsion Laboratory at the California Institute of Technology. Mattmann is also an adjunct Associate Professor of Computer Science at the University of Southern California and a member of the Board of Directors of the Apache Software Foundation.

 How to See Living Machines | File Type: audio/mpeg | Duration: 15:09

Podcast host Jorge Salazar interviews Eva Nogales, Professor in the Department of Molecular and Cellular Biology at UC Berkeley, Senior Faculty Scientist at Lawrence Berkeley National Laboratory, and Howard Hughes Medical Institute Investigator; and Ivaylo Ivanov, Associate Professor in the Department of Chemistry at Georgia State University. Scientists have taken the closest look yet at molecule-sized machinery called the human preinitiation complex. It basically opens up DNA so that genes can be copied and turned into proteins. The science team drew from Northwestern University, Lawrence Berkeley National Laboratory, Georgia State University, and UC Berkeley. They used a cutting-edge technique called cryo-electron microscopy and combined it with supercomputer analysis. They published their results in May 2016 in the journal Nature. Over 1.4 million 'freeze frames' of the human preinitiation complex, or PIC, were obtained with cryo-electron microscopy. They were initially processed using supercomputers at the National Energy Research Scientific Computing Center. This sifted out background noise and reconstructed three-dimensional density maps that showed details in the shape of the molecule that had never been seen before. The study scientists next built an accurate model that made physical sense of the density maps of PIC. For that they used XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation. Through XSEDE, the Stampede supercomputer at the Texas Advanced Computing Center modeled the human preinitiation complex for this study. The team's computational work on molecular machines also includes XSEDE allocations on the Comet supercomputer at the San Diego Supercomputer Center.

 Lori Diachin Highlights Supercomputing Technical Program | File Type: audio/mpeg | Duration: 11:34

Podcast host Jorge Salazar interviews Lori Diachin of Lawrence Livermore National Laboratory. She is the Director of the Center for Applied Scientific Computing and the Research Program Manager and Point of Contact for the Office of Science's Advanced Scientific Computing Research organization. She also leads the Frameworks, Algorithms and Scalable Technologies for Mathematics (FASTMath) SciDAC center. This year Dr. Diachin was the Chair of the Technical Program at SC16. Right before the conference, she spoke by phone about the highlights and some of the changes happening at SC16. Lori Diachin: I think the most important thing I'd like people to know about SC16 is that it is a great venue for bringing the entire community together, having these conversations about what we're doing now, what the environment looks like now, and what it'll look like in five, ten, fifteen years. The fact that so many people come to this conference allows you to really see a lot of diversity in the technologies being pursued and in the kinds of applications that are being pursued, from both the U.S. environment and the international environment. I think that's the most exciting thing that I think about when I think about supercomputing.

 John McCalpin Surveys Memory Bandwidth of Supercomputers | File Type: audio/mpeg | Duration: 24:09

Podcast host Jorge Salazar reports on SC16 in Salt Lake City, the 28th annual International Conference for High Performance Computing, Networking, Storage and Analysis. The event showcases the latest in supercomputing to advance scientific discovery, research, education and commerce. The podcast interview features John McCalpin, a Research Scientist in the High Performance Computing Group at the Texas Advanced Computing Center and Co-Director of the Advanced Computing Evaluation Laboratory at TACC. Twenty-five years ago, as an oceanographer at the University of Delaware, Dr. McCalpin developed the STREAM benchmark. It continues to be widely used as a simple synthetic benchmark program that measures sustainable memory bandwidth and the corresponding computation rate for simple vector kernels. Dr. McCalpin was invited to speak at SC16; his talk is titled "Memory Bandwidth and System Balance in HPC Systems." John McCalpin: The most important thing for an application-oriented scientist or developer to understand is whether or not their workloads are memory-bandwidth intensive. It turns out that this is not as obvious as one might think… Due to the complexity of the systems and difficulties in understanding hardware performance counters, these can be tricky issues to understand. If you are in an area that tends to use high bandwidth, you can get some advantage from system configurations. It's fairly easy to show, for example, that if you're running medium-to-high bandwidth codes you don't want to buy the maximum number of cores or the maximum frequency in the processors, because those carry quite a premium and don't deliver any more bandwidth than less expensive processors. You can sometimes get additional improvements through algorithmic changes, which in some sense don't look optimal because they involve more arithmetic. But if they don't involve more memory traffic, the arithmetic may be very close to free and you may be able to get an improved quality of solution. In the long run, if you need orders of magnitude more bandwidth than is currently available, there is a set of technologies sometimes referred to as processor in memory - I call it processor at memory - that involves cheaper processors distributed out adjacent to the memory chips. The processors are cheaper, simpler, and lower power. That could allow a significant reduction in the cost to build the systems, which allows you to build them a lot bigger and therefore deliver significantly higher memory bandwidth. That's a very revolutionary change.
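To give a flavor of what STREAM measures, the sketch below times a Triad-style kernel, a[i] = b[i] + q*c[i], over arrays much larger than cache and reports the implied memory bandwidth. It is a stripped-down illustration in the spirit of STREAM, not the official benchmark, which adds several kernels, repeated trials, timing rules, and result validation; the array size and scalar here are arbitrary.

```cpp
// Stripped-down sketch in the spirit of STREAM's Triad kernel; not the official
// benchmark. Times a[i] = b[i] + q*c[i] over arrays well beyond cache size and
// reports the implied sustainable memory bandwidth.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t N = 20'000'000;          // ~0.5 GB total across the three arrays
    const double q = 3.0;
    std::vector<double> a(N, 0.0), b(N, 1.0), c(N, 2.0);

    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < N; ++i)
        a[i] = b[i] + q * c[i];           // Triad: two loads + one store per element
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double bytes   = 3.0 * sizeof(double) * N;   // bytes moved (ignoring write-allocate traffic)
    std::printf("Triad: %.2f GB/s (check a[0]=%g)\n", bytes / seconds / 1e9, a[0]);
    return 0;
}
```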

 Sadasivan Shankar Proposes Co-design 3.0 for Supercomputing | File Type: audio/mpeg | Duration: 21:19

Podcast host Jorge Salazar interviews Sadasivan Shankar, the Margaret and Will Hearst Visiting Lecturer in Computational Science and Engineering at the John A. Paulson School of Engineering and Applied Sciences at Harvard University. Computer hardware speeds have grown exponentially for the past 50 years. We call this Moore's Law. But we haven't seen a Moore's Law for software. That's according to Sadasivan Shankar of Harvard University. He said the reason is a lack of communication and close collaboration between hardware developers and the users trying to solve problems in fields like social networking, cancer modeling, personalized medicine, or designing the next-generation battery for electrical storage. Dr. Shankar proposes a new paradigm in which software applications are part of the design of new computer architectures. He calls this paradigm Co-Design 3.0. Shankar was invited to speak about it at the SC16 conference. Sadasivan Shankar: We want to see what will make high performance computing personalizable. How can we train the upcoming workforce on different aspects of all the components: the architecture, hardware, algorithms, and software? This is why I think universities play an important role in this, as much as the national labs have been playing in high performance computing. We want to be able to solve the real problems: cancer cures, personalized medicine, new battery materials, new catalysts, eliminating toxic materials. Both research and development are needed to enable this paradigm. Can we essentially do them faster and economically? Computing has brought us very far. But can we take it even farther? That's the question that we should ask ourselves.

 Kelly Gaither Starts Advanced Computing for Social Change | File Type: audio/mpeg | Duration: 20:51

Podcast host Jorge Salazar interviews Kelly Gaither, Director of Visualization at the Texas Advanced Computing Center (TACC). Gaither is also the Director of Community Engagement and Enrichment for XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation. XSEDE identified 20 graduate and undergraduate students to participate in a week-long event called Advanced Computing for Social Change, hosted by XSEDE, TACC and SC16. The SC16 Social Action Network student cohort will tackle a computing challenge. They will learn how to mine a variety of data sets, such as social media data spanning a number of years and large geographic regions. To complete their analysis in a timely fashion, they will learn how to organize the large data sets to allow fast queries. The students will also use a computational modeling tool called risk terrain modeling, which has been used to predict crime from crime statistics. This technique was first introduced to TACC in work done with the Cook County Hospital in Chicago, Illinois, which used statistical data to predict child maltreatment in an effort to put programs in place to prevent it. Kelly Gaither: Advanced Computing for Social Change is an initiative that we started to really use our collective capabilities, here at TACC and more broadly at supercomputing centers across the nation, to work on problems that we know have need for advanced computing. You can think of it as data analysis, data collection, all the way to visualization and everything in between, to really work on problems of societal benefit. We want to make a positive change using the skill sets we already have. The SC16 supercomputing conference takes place in Salt Lake City, Utah, November 13-18, 2016. The event showcases the latest in supercomputing to advance scientific discovery, research, education and commerce.

 John West Leads Diversity Efforts in Supercomputing | File Type: audio/mpeg | Duration: 17:05

The SC16 Supercomputing Conference has focused on raising awareness and helping to change attitudes about diversity. That's according to SC16 General Chair John West, Director of Strategic Initiatives at the Texas Advanced Computing Center. West explained that long-term efforts are underway at SC16 to promote diversity in the supercomputing community. These include a new double-blind review of technical papers; a new standing subcommittee on diversity and inclusion added to the conference organizing committee; adoption of demographic measurements of the SC16 conference committee and attendees; active recruitment of student volunteers at organizations and universities that serve underrepresented groups; on-site child care; a new official code of conduct; fellowships that promote inclusivity; and continued support of the Women in IT Networking program. John West: For me, (diversity) is a numbers problem. If you look at HPC (high performance computing), more and more communities are adopting advanced computing as a baseline tool for their research. I think a big part of this shift is that HPC has been around for a long time, and more and more communities are starting to become aware of it. But it's also driven by the success of efforts at TACC and other HPC centers that are pushing this idea of science environments front-ended by user-friendly technologies that help flatten the learning curve we've traditionally had, which is a real barrier to new communities of users coming into HPC. That success is driving more people to use HPC. And as we have more users, we're going to have to provide more resources to these folks. We need more highly qualified staff in the provider community, both in the centers themselves and in the organizations that create scientific software that people use… If we're going to try and broaden that talent pipeline to grow the workforce to meet the growing demand, the best opportunity to do that is to try and grab a larger share of that untapped pool of talent. The SC16 supercomputing conference takes place in Salt Lake City, Utah, November 13-18, 2016. The event showcases the latest in supercomputing to advance scientific discovery, research, education and commerce.

 New Hikari Supercomputer Starts Solar HVDC | File Type: audio/mpeg | Duration: 11:01

A new kind of supercomputer system has come online at the Texas Advanced Computing Center. It's called Hikari, which is Japanese for "light." What's new is that Hikari is the first supercomputer in the U.S. to use solar panels and High Voltage Direct Current, or HVDC, for its power. Hikari aims to demonstrate that HVDC works not only for supercomputers but also for data centers and commercial buildings. The Hikari project is a collaboration headed by NTT Facilities of Japan, with the support of the New Energy and Industrial Technology Development Organization, or NEDO. NTT Facilities partnered with The University of Texas at Austin to begin demonstration tests of the HVDC power feeding system for the Hikari project in late August 2016. The project aims to show that Hikari's high-capacity HVDC power equipment and lithium-ion batteries can cut energy use by 15 percent compared to conventional systems. Podcast host Jorge Salazar discusses the Hikari HVDC project with Toshihiro Hayashi, Assistant Manager in the Engineering Divisions of NTT Facilities, Japan; and Jim Stark, Director of Engineering and Construction for the Electronic Environments Corporation, a division of NTT Facilities.

 Soybean science blooms with supercomputers | File Type: audio/mpeg | Duration: 11:17

It takes a supercomputer to grow a better soybean. A project called the Soybean Knowledge Base, or SoyKB for short, aims to do just that. Scientists at the University of Missouri-Columbia developed SoyKB as a publicly available web resource for all soybean data, from molecular data to field data, that includes several analytical tools. SoyKB has grown to be used by thousands of soybean researchers in the U.S. and beyond. The team did it with the support of XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation. The SoyKB team needed XSEDE resources to sequence and analyze the genomes of over a thousand soybean lines, using about 370,000 core hours on the Stampede supercomputer at the Texas Advanced Computing Center. They've since moved that work from Stampede to Wrangler, TACC's newest data-intensive system. And they're getting more users on board with an allocation on XSEDE's Jetstream, a fully configurable cloud environment for science. Podcast host Jorge Salazar interviews Trupti Joshi and Dong Xu of the University of Missouri-Columbia; and Mats Rynge of the University of Southern California.
