TACC Podcasts

Summary: The Texas Advanced Computing Center (TACC) is part of the University of Texas at Austin. TACC designs and operates some of the world's most powerful computing resources. The center's mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

Podcasts:

 When Data's Deep, Dark Places Need to be Illuminated | File Type: audio/mpeg | Duration: 23:42

The World Wide Web is like an iceberg, with most of its data hidden below the surface. There lies the 'deep web,' estimated at 500 times bigger than the 'surface web' that most people see through search engines like Google. An innovative data-intensive supercomputer at TACC called Wrangler is helping researchers get meaningful answers from the hidden data of the public web. Wrangler uses 600 terabytes of flash storage that speedily reads and writes files. This lets it fly past bottlenecks with big data that can slow down even the fastest computers. Podcast host Jorge Salazar interviews graduate student Karanjeet Singh; and Chris Mattmann, Chief Architect in the Instrument and Science Data Systems Section of NASA's Jet Propulsion Laboratory at the California Institute of Technology. Mattmann is also an adjunct Associate Professor of Computer Science at the University of Southern California and a member of the Board of Directors for the Apache Software Foundation.

 How to See Living Machines | File Type: audio/mpeg | Duration: 15:09

Podcast host Jorge Salazar interviews Eva Nogales, Professor in the Department of Molecular and Cellular Biology at UC Berkeley, Senior Faculty Scientist at Lawrence Berkeley National Laboratory, and Howard Hughes Medical Institute Investigator; and Ivaylo Ivanov, Associate Professor in the Department of Chemistry at Georgia State University. Scientists have taken the closest look yet at molecule-sized machinery called the human preinitiation complex. It basically opens up DNA so that genes can be copied and turned into proteins. The science team formed from Northwestern University, Lawrence Berkeley National Laboratory, Georgia State University, and UC Berkeley. They used a cutting-edge technique called cryo-electron microscopy and combined it with supercomputer analysis. They published their results in May 2016 in the journal Nature. Over 1.4 million 'freeze frames' of the human preinitiation complex, or PIC, were obtained with cryo-electron microscopy. They were initially processed using supercomputers at the National Energy Research Scientific Computing Center. This sifted out background noise and reconstructed three-dimensional density maps that showed details in the shape of the molecule that had never been seen before. Study scientists next built an accurate model that made physical sense of the density maps of PIC. For that they used XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation. Through XSEDE, the Stampede supercomputer at the Texas Advanced Computing Center modeled the human preinitiation complex for this study. Their computational work on molecular machines also includes XSEDE allocations on the Comet supercomputer at the San Diego Supercomputer Center.

 Lori Diachin Highlights Supercomputing Technical Program | File Type: audio/mpeg | Duration: 11:34

Podcast host Jorge Salazar interviews Lori Diachin of Lawrence Livermore National Laboratory. She's the Director of the Center for Applied Scientific Computing and the Research Program Manager and Point of Contact for the Office of Science Advanced Scientific Computing Research organization. She also leads the Frameworks, Algorithms and Scalable Technologies for Mathematics (FASTMath) SciDAC center. This year Dr. Diachin was the Chair of the Technical Program at SC16. Right before the conference she spoke by phone about the highlights and some changes happening at SC16. Lori Diachin: I think the most important thing I'd like people to know about SC16 is that it is a great venue for bringing the entire community together, having these conversations about what we're doing now, what the environment looks like now and what it'll look like in five, ten, fifteen years. The fact that so many people come to this conference allows you to really see a lot of diversity in the technologies being pursued, in the kinds of applications that are being pursued - from both the U.S. environment and also the international environment. I think that's the most exciting thing that I think about when I think about supercomputing.

 John McCalpin Surveys Memory Bandwidth of Supercomputers | File Type: audio/mpeg | Duration: 24:09

Podcast host Jorge Salazar reports on SC16 in Salt Lake City, the 28th annual International Conference for High Performance Computing, Networking, Storage and Analysis. The event showcases the latest in supercomputing to advance scientific discovery, research, education and commerce. The podcast interview features John McCalpin, a Research Scientist in the High Performance Computing Group at the Texas Advanced Computing Center and Co-Director of the Advanced Computing Evaluation Laboratory at TACC. Twenty-five years ago, as an oceanographer at the University of Delaware, Dr. McCalpin developed the STREAM benchmark. It continues to be widely used as a simple synthetic benchmark program that measures sustainable memory bandwidth and the corresponding computation rate for simple vector kernels. Dr. McCalpin was invited to speak at SC16. His talk is titled, "Memory Bandwidth and System Balance in HPC Systems." John McCalpin: The most important thing for an application-oriented scientist or developer to understand is whether or not their workloads are memory-bandwidth intensive. It turns out that this is not as obvious as one might think… Due to the complexity of the systems and difficulties in understanding hardware performance counters, these can be tricky issues to understand. If you are in an area that tends to use high bandwidth, you can get some advantage from system configurations. It's fairly easy to show, for example, that if you're running medium-to-high bandwidth codes you don't want to buy the maximum number of cores or the maximum frequency in the processors, because those carry quite a premium and don't deliver any more bandwidth than less expensive processors. You can sometimes get additional improvements through algorithmic changes, which in some sense don't look optimal because they involve more arithmetic. But if they don't involve more memory traffic, the arithmetic may be very close to free and you may be able to get an improved quality of solution. In the long run, if you need orders of magnitude more bandwidth than is currently available, there's a set of technologies that are sometimes referred to as processor in memory - I call it processor at memory - technologies that involve cheaper processors distributed out adjacent to the memory chips. Those processors are cheaper, simpler, lower power. That could allow a significant reduction in the cost to build the systems, which allows you to build them a lot bigger and therefore deliver significantly higher memory bandwidth. That's a very revolutionary change.
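For listeners who want a concrete sense of what STREAM measures, here is a minimal sketch of a STREAM-style "triad" kernel in C. It is an illustration only, not the official benchmark: the array length, the single timed pass, and the bandwidth arithmetic are simplifications, and the real STREAM code also runs Copy, Scale, and Add kernels, repeats each several times, and validates the results.

```c
/* Minimal STREAM-style triad sketch: a[i] = b[i] + scalar * c[i].
 * Illustrative only; array size and timing are simplified compared
 * to the official STREAM benchmark. Compile with optimization, e.g.
 *   cc -O2 -o triad triad.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L  /* large enough that the arrays do not fit in cache */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    const double scalar = 3.0;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];   /* two reads and one write per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes   = 3.0 * sizeof(double) * N;  /* traffic: read b, read c, write a */

    /* Print a value from a[] so the compiler cannot discard the loop. */
    printf("Triad: %.2f GB/s (a[0] = %.1f)\n", bytes / seconds / 1e9, a[0]);

    free(a); free(b); free(c);
    return 0;
}
```

Comparing an application's achieved data rate against the number a kernel like this reports is one quick way to judge whether a workload is memory-bandwidth intensive in the sense McCalpin describes.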

 Sadasivan Shankar Proposes Co-design 3.0 for Supercomputing | File Type: audio/mpeg | Duration: 21:19

Podcast host Jorge Salazar interviews Sadasivan Shankar, the Margaret and Will Hearst Visiting Lecturer in Computational Science and Engineering at the John A. Paulson School of Engineering and Applied Sciences at Harvard University. Computer hardware speeds have grown exponentially for the past 50 years. We call this Moore's Law. But we haven't seen a Moore's Law for software. That's according to Sadasivan Shankar of Harvard University. He said the reason for that is a lack of communication and close collaboration between hardware developers and the users trying to solve problems in fields like social networking, cancer modeling, personalized medicine, or designing the next-generation battery for electrical storage. Dr. Shankar proposes a new paradigm in which software applications are part of the design of new computer architectures. He calls this paradigm Co-Design 3.0. Shankar was invited to speak about it at the SC16 conference. Sadasivan Shankar: We want to see what will make high performance computing personalizable. How can we train the upcoming workforce on different aspects of all the components, the architecture, hardware, algorithms, and software? This is why I think universities play an important role in this, as much as the national labs have been playing in high performance computing. We want to be able to solve the real problems – cancer cures, personalized medicine, new battery materials, new catalysts, eliminating toxic materials. Both research and development are needed to enable this paradigm. Can we essentially do them faster and economically? Computing has brought us very far. But can we take it even farther? That's the question that we should ask ourselves.

  Kelly Gaither Starts Advanced Computing for Social Change | File Type: audio/mpeg | Duration: 20:51

Podcast host Jorge Salazar interviews Kelly Gaither, Director of Visualization at the Texas Advanced Computing Center (TACC). Gaither is also the Director of Community Engagement and Enrichment for XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation. XSEDE identified 20 graduate and undergraduate students to participate in a week-long event called Advanced Computing for Social Change. The event is hosted by XSEDE, TACC and SC16. The SC16 Social Action Network student cohort will tackle a computing challenge. They will learn how to mine a variety of data sets, such as social media data spanning a number of years and large geographic regions. To complete their analysis in a timely fashion they will learn how to organize the large data sets to allow fast queries. The students of the SC16 Social Action Network will also use a computational modeling tool called risk terrain modeling, which has been used to predict crime from crime statistics. This technique was first introduced to TACC in work done with the Cook County Hospital in Chicago, Illinois. That work used statistical data to predict child maltreatment in an effort to put programs in place to prevent it. Kelly Gaither: Advanced Computing for Social Change is an initiative that we started to really use our collective capabilities, here at TACC and more broadly at supercomputing centers across the nation, to work on problems that we know have need for advanced computing. You can think of it as data analysis, data collection, all the way to visualization and everything in between, to really work on problems of societal benefit. We want to make a positive change using the skill sets we already have. The SC16 supercomputing conference takes place in Salt Lake City, Utah, November 13-18, 2016. The event showcases the latest in supercomputing to advance scientific discovery, research, education and commerce.

 John West Leads Diversity Efforts in Supercomputing | File Type: audio/mpeg | Duration: 17:05

The SC16 Supercomputing Conference has focused on raising awareness and helping to change attitudes about diversity. That's according to SC16 General Chair John West, Director of Strategic Initiatives at the Texas Advanced Computing Center. West explained that long-term efforts are underway at SC16 to promote diversity in the supercomputing community. These include a new double-blind review of technical papers; a new standing subcommittee focused on diversity and inclusion added to the conference organizing committee; adoption of demographic measurements of the SC16 conference committee and attendees; active recruitment of student volunteers at organizations and universities that serve underrepresented groups; on-site child care; a new official code of conduct; fellowships that promote inclusivity; and continued support of the Women in IT Networking program. John West: For me, (diversity) is a numbers problem. If you look at HPC (high performance computing), more and more communities are adopting advanced computing as a baseline tool for their research. I think a big part of this shift is that HPC has been around for a long time. More and more communities are starting to become aware of it. But it's also driven by the success of efforts at TACC and other HPC centers that are pushing this idea of science environments that are front-ended by user-friendly technologies, which help flatten the learning curve that has traditionally been a real barrier to new communities of users coming into HPC. That success is driving more people to use HPC. And as we have more users, we're going to have to provide more resources to these folks. We need more highly qualified staff in the provider community, both in the centers themselves and in the organizations that create scientific software that people use… If we're going to try and broaden that talent pipeline to grow the workforce to meet the growing demand, the best opportunity to do that is to try and grab a larger share of that untapped pool of talent. The SC16 supercomputing conference takes place in Salt Lake City, Utah, November 13-18, 2016. The event showcases the latest in supercomputing to advance scientific discovery, research, education and commerce.

 New Hikari Supercomputer Starts Solar HVDC | File Type: audio/mpeg | Duration: 11:01

A new kind of supercomputer system has come online at the Texas Advanced Computing Center. It's called Hikari, which is Japanese for "light." What's new is that Hikari is the first supercomputer in the US to use solar panels and High Voltage Direct Current, or HVDC, for its power. The project aims to demonstrate that HVDC works not only for supercomputers, but also for data centers and commercial buildings. The Hikari project is a collaboration headed by NTT Facilities, based in Japan, with the support of the New Energy and Industrial Technology Development Organization, or NEDO. NTT Facilities partnered with the University of Texas at Austin to begin demonstration tests of the HVDC power feeding system for the Hikari project in late August 2016. The tests aim to show that Hikari's high-capacity HVDC power equipment and lithium-ion batteries can save 15 percent in energy compared to conventional systems. Podcast host Jorge Salazar discusses the Hikari HVDC project with Toshihiro Hayashi, Assistant Manager in the Engineering Divisions of NTT Facilities, Japan; and Jim Stark, Director of Engineering and Construction for the Electronic Environments Corporation, a Division of NTT Facilities.

 Soybean science blooms with supercomputers | File Type: audio/mpeg | Duration: 11:17

It takes a supercomputer to grow a better soybean. A project called the Soybean Knowledge Base, or SoyKB for short, wants to do just that. Scientists at the University of Missouri-Columbia developed SoyKB. They say they've made SoyKB a publicly available web resource for all soybean data, from molecular data to field data, that includes several analytical tools. SoyKB has grown to be used by thousands of soybean researchers in the U.S. and beyond. They did it with the support of XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation. The SoyKB team needed XSEDE resources to sequence and analyze the genomes of over a thousand soybean lines, using about 370,000 core hours on the Stampede supercomputer at the Texas Advanced Computing Center. They've since moved that work from Stampede to Wrangler, TACC's newest data-intensive system. And they're getting more users onboard with an allocation on XSEDE's Jetstream, a fully configurable cloud environment for science. Host Jorge Salazar interviews Trupti Joshi and Dong Xu of the University of Missouri-Columbia; and Mats Rynge of the University of Southern California.

 Supercomputers Fire Lasers to Shoot Gamma Ray Beam | File Type: audio/mpeg | Duration: 13:10

Supercomputers might have helped unlock a new way to make controlled beams of gamma rays, according to scientists at the University of Texas at Austin. The simulations, done on the Stampede and Lonestar systems at TACC, will guide a real experiment later in the summer of 2016 with the recently upgraded Texas Petawatt Laser, one of the most powerful in the world. The scientists say the quest to produce gamma rays from non-radioactive materials will advance basic understanding of things like the inside of stars. What's more, gamma rays are used by hospitals to eradicate cancer and image the brain, and they're used to scan cargo containers for terrorist materials. Unfortunately, no one has yet been able to produce gamma ray beams from non-radioactive sources. These scientists hope to change that. On the podcast are the three researchers who published their work in May 2016 in the journal Physical Review Letters. Alex Arefiev is a research scientist at the Institute for Fusion Studies and at the Center for High Energy Density Science at UT Austin. Toma Toncian is the assistant director of the Center of High Energy Density Science. And the lead author is David Stark, a scientist at Los Alamos National Laboratory. Jorge Salazar hosted the podcast.

 UT Chancellor William McRaven on TACC supercomputers - "We need to be the best in the world" | File Type: audio/mpeg | Duration: 6:44

University of Texas System Chancellor William McRaven gave a podcast interview at TACC during a visit for its building expansion dedication and the announcement of a $30 million award from the National Science Foundation for the new Stampede 2 supercomputer system. Chancellor McRaven spoke of his path to leading the UT System's 14 institutions, the importance of supercomputers to Texans and to the nation, the new Dell Medical School, and more. William McRaven: "Behind all of this magnificent technology are the fantastic faculty, researchers, interns, our corporate partners that are part of this, the National Science Foundation. There are people behind all of the success of TACC. I think that's the point we can never forget."

 Zika Hackathon Fights Disease with Big Data | File Type: audio/mpeg | Duration: 7:50

On May 15, Austin, Texas, held a Zika Hackathon. More than 50 data scientists, engineers, and UT Austin students gathered downtown at the offices of Cloudera, a big data company. They used big data to help fight the spread of Zika. Mosquitoes carry and spread the Zika virus, which can cause birth defects and other symptoms like fever. The U.S. Centers for Disease Control is now ramping up collection of data that tracks the spread of Zika. But big gaps exist in linking different kinds of data, and that makes it tough for experts to predict where the virus will go next and what to do to prevent it. The Texas Advanced Computing Center provided time on the Wrangler data-intensive supercomputer as a virtual workspace for the Zika hackers. Featured on the podcast are Ari Kahn of the Texas Advanced Computing Center and Eddie Garcia of Cloudera. The podcast is hosted by Jorge Salazar of TACC.

 Sudden Collapse: Supercomputing Spotlight on Gels | File Type: audio/mpeg | Duration: 11:17

Chemical engineering researcher Roseanna Zia has begun to shed light on the secret world of colloidal gels - liquids dispersed in a solid. Yogurt, shampoo, and Jell-O are just a few examples. Sometimes gels act like liquids, and sometimes they act like solids. Understanding the theory behind these transitions can translate to real-world applications, such as helping explain why mucus - also a colloidal gel - in the airways of people with cystic fibrosis can thicken, resist flow and possibly threaten life. Roseanna Zia is an Assistant Professor of Chemical and Biomolecular Engineering at Cornell. She led development of the biggest dynamic computer simulations of colloidal gels yet, with over 750,000 particles. The Zia Group used the Stampede supercomputer at TACC through an allocation from XSEDE, the Extreme Science and Engineering Discovery Environment, a single virtual system funded by the National Science Foundation (NSF) that allows scientists to interactively share computing resources, data and expertise. Podcast host Jorge Salazar interviewed Roseanna Zia. Music Credits: Raro Bueno, Chuzausen freemusicarchive.org/music/Chuzausen/

 Docker for Science | File Type: audio/mpeg | Duration: 14:02

Scientists might find a friend in the open source software called Docker. It's a platform that bundles an application and the dependencies that sustain it into a lightweight package that can run on any system. As more scientists share not only their results but also their data and code, Docker is helping them reproduce the computational analysis behind the results. What's more, Docker is one of the main tools used in the Agave API platform, a platform-as-a-service solution for hybrid cloud computing developed at TACC and funded in part by the National Science Foundation. Podcast host Jorge Salazar talks with software developer and researcher Joe Stubbs about using Docker for science. Stubbs is a Research Engineering and Scientist Associate in the Web & Cloud Services group at the Texas Advanced Computing Center.
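As a rough, hypothetical illustration of the idea (not TACC's or the Agave platform's actual configuration), a researcher might describe an analysis environment in a short Dockerfile like the one below; the file names and package list are placeholders.

```dockerfile
# Hypothetical Dockerfile: package a Python analysis script with pinned
# dependencies so collaborators can rerun the same analysis anywhere
# Docker runs. File names here are placeholders, not a real TACC project.
FROM python:3.10-slim

WORKDIR /analysis

# Install pinned dependencies listed in requirements.txt for reproducibility
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the analysis code itself
COPY analyze.py .

# Default command when the container starts
CMD ["python", "analyze.py"]
```

Anyone with Docker installed could then rebuild and rerun the analysis with the standard commands `docker build -t my-analysis .` and `docker run my-analysis`, getting the same software environment regardless of the host machine.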

 Dark Energy of a Million Galaxies | File Type: audio/mpeg | Duration: 12:13

UT Austin astronomer Steven Finkelstein eyes Wrangler supercomputer for HETDEX extragalactic survey, in this interview with host Jorge Salazar. A million galaxies far, far away are predicted to be discovered before the year 2020 thanks to a monumental mapping of the night sky in search of a mysterious force. That's according to scientists working on HETDEX, the Hobby-Eberly Telescope Dark Energy Experiment. They're going to transform the big data from galaxy spectra billions of light-years away into meaningful discoveries with the help of the Wrangler data-intensive supercomputer. "You can imagine that would require an immense amount of computing storage and computing power. It was a natural match for us and TACC to be able to make use of these resources," Steven Finkelstein said. Finkelstein is an assistant professor in the Department of Astronomy at The University of Texas at Austin (UT Austin). He's one of the lead scientists working on HETDEX. "HETDEX is one of the largest galaxy surveys that has ever been done," Finkelstein said. Starting in late 2016, thousands of new galaxies will be detected each night by the Hobby-Eberly Telescope at the McDonald Observatory in West Texas. It'll study them using an instrument called VIRUS, the Visible Integral Field Replicable Unit Spectrograph. VIRUS takes starlight from distant galaxies and splits the light into its component colors like a prism does. Music Credits: Raro Bueno, Chuzausen freemusicarchive.org/music/Chuzausen/
