---
title: "Hacking Through Machine Learning at the OpenPOWER Developer Congress"
date: "2017-05-02"
categories:
- "blogs"
tags:
- "openpower"
- "featured"
- "machine-learning"
- "openpower-machine-learning-work-group"
- "developers"
- "developer-congress"
- "openpower-developer-congress"
---
By Sumit Gupta, Vice President, IBM Cognitive Systems
Ten years ago, every CEO leaned over to his or her CIO and CTO and said, “we’ve got to figure out big data.” Five years ago, they leaned over and said, “we’ve got to figure out cloud.” This year, every CEO is asking their team to figure out “AI,” or artificial intelligence.
IBM laid out an accelerated computing future several years ago as part of our OpenPOWER initiative. This accelerated computing architecture has now become the foundation of modern AI and machine learning workloads such as deep learning. Deep learning is so compute intensive that, even with several GPUs in a single server, a single training run can take days, if not weeks, to complete.
The OpenPOWER architecture thrives on this kind of compute intensity. The POWER processor has much higher compute density than x86 CPUs (up to 192 virtual cores per CPU socket in POWER8). This per-core density, combined with high-speed accelerator interfaces like NVLink and CAPI that optimize GPU pairing, provides a substantial performance benefit. And the broad OpenPOWER Linux ecosystem, with 300+ members, means that you can run these high-performance POWER-based systems in your existing data center, either on-prem or from your favorite POWER cloud provider, at costs comparable to legacy x86 architectures.
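As a rough, illustrative sketch (not from the original post): on a Linux-on-POWER system you can see the hardware-thread density and GPU topology described above with standard tools. The snippet below assumes a Python 3.7+ environment with `lscpu` available and NVIDIA's `nvidia-smi` installed on a GPU-equipped machine.

```python
import subprocess

# Report logical CPUs (hardware threads); POWER8 cores expose up to 8 threads each via SMT.
lscpu = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
for line in lscpu.splitlines():
    if line.startswith(("CPU(s):", "Thread(s) per core", "Model name")):
        print(line)

# Show the GPU/CPU link topology; NVLink connections appear as "NV#" entries in the matrix.
topo = subprocess.run(["nvidia-smi", "topo", "-m"], capture_output=True, text=True)
print(topo.stdout)
```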
**Take a Hack at the Machine Learning Work Group**
The recently formed OpenPOWER Machine Learning Work Group gathers experts in the field to focus on the challenges that machine learning developers are continuously facing. Participants identify use cases, define requirements, and collaborate on solution architecture optimizations. By gathering in a workgroup with a laser focus, people from diverse organizations can better understand and engineer solutions that address similar needs and pain points.
The OpenPOWER Foundation pursues technical solutions built on the POWER architecture through a variety of member-run work groups. The Machine Learning Work Group is a great example of how hardware and software can be leveraged and optimized across solutions that span the OpenPOWER ecosystem.
**Accelerate Your Machine Learning Solution at the Developer Congress**
This spring, the OpenPOWER Foundation will host the [OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/), a “get your hands dirty” event on May 22-25 in San Francisco. This unique event provides developers the opportunity to create and advance OpenPOWER-based solutions by taking advantage of on-site mentoring, learning from peers, and networking with developers, technical experts, and industry thought leaders. If you are a developer working on Machine Learning solutions that employ the POWER architecture, this event is for you.
The Congress focuses on full-stack solutions: software, firmware, hardware infrastructure, and tooling. It's a hands-on opportunity to ideate, learn, and develop solutions in a collaborative and supportive environment. At the end of the Congress, you will have a significant head start on developing new solutions that utilize OpenPOWER technologies and incorporate OpenPOWER Ready products.
There has never been another event like this one. It's a developer conference devoted to developing, not to sitting through slideware presentations or sales pitches. Industry experts from the top companies innovating in deep learning, machine learning, and artificial intelligence will be on hand for networking, mentoring, and advice.
**A Developer Congress Agenda Specific to Machine Learning**
The OpenPOWER Developer Congress agenda addresses a variety of machine learning topics. For example, you can participate in hands-on VisionBrain training, training a new model and generating an API for image classification, using your own family pictures as the training data. The current agenda includes:
- VisionBrain: Deep Learning Development Platform for Computer Vision
- GPU Programming Training, including OpenACC and CUDA (see the short sketch after this list)
- Inference System for Deep Learning
- Intro to Machine Learning / Deep Learning
- Develop / Port / Optimize on Power Systems and GPUs
- Advanced Optimization
- Spark on Power for Data Science
- OpenStack and Database as a Service
- OpenBMC
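To give a concrete flavor of the kind of GPU programming covered in the CUDA/OpenACC training item above, here is a minimal, hypothetical sketch (not Congress material) of a CUDA-style kernel written in Python. It assumes Numba with CUDA support and an NVIDIA GPU are available.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    # Each GPU thread handles one element of the arrays.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba copies arrays to and from the GPU

assert np.allclose(out, a + b)
```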
**Bring Your Laptop and Your Best Ideas**
[The OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/) will take place May 22-25 in San Francisco. The event will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. So bring your laptop and preferred development tools and prepare to get your hands dirty!
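As a quick sanity check of your environment before the event, a TensorFlow install (for example via PowerAI or Anaconda) can confirm that your GPUs are visible to the framework. This is a minimal sketch assuming a TensorFlow 2.x-style build with GPU support; it is not an official setup step from the Congress.

```python
import tensorflow as tf

# List the accelerators TensorFlow can see; an empty list means it will run CPU-only.
gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {len(gpus)}")
for gpu in gpus:
    print(" ", gpu.name)
```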
**About the author**
[![](images/IBM.png)](https://openpowerfoundation.org/wp-content/uploads/2017/05/IBM.png)Sumit Gupta is Vice President, IBM Cognitive Systems, where he leads the product and business strategy for HPC, AI, and Analytics. Sumit joined IBM two years ago from NVIDIA, where he led the GPU accelerator business.