[concurrency-interest] [Fwd: [EE CS Colloq] Computer Architecture is Back * 4:15PM, Wed Jan 31, 2007 in Gates B01]

Brian Goetz brian at quiotix.com
Fri Jan 26 11:35:16 EST 2007


In case anyone is in the Bay Area next week, this sounds like it should 
be a fascinating talk.


              Stanford EE Computer Systems Colloquium
                  4:15PM, Wednesday, Jan 31, 2007
         HP Auditorium, Gates Computer Science Building B01
                    http://ee380.stanford.edu

Topic:    Computer Architecture is Back
           The Berkeley View of the Parallel Computing Research Landscape

Speaker:  Dave Patterson
           EECS, UC Berkeley

About the talk:

The sequential processor era is now officially over, as the IT
industry has bet its future on multiple processors per chip. The
new trend is doubling the number of cores per chip every two
years instead of the regular doubling of uniprocessor performance.
This shift toward increasing parallelism is not a triumphant
stride forward based on breakthroughs in novel software and
architectures for parallelism; instead, this plunge into
parallelism is actually a retreat from even greater challenges
that thwart efficient silicon implementation of traditional
uniprocessor architectures.

A diverse group of University of California at Berkeley
researchers from many backgrounds -- circuit design, computer
architecture, massively parallel computing, computer-aided
design, embedded hardware and software, programming languages,
compilers, scientific programming, and numerical analysis -- met
for nearly two years to discuss parallelism from these many
angles. This talk and a technical report are the result. (See
view.eecs.berkeley.edu)

We concluded that sneaking up on the problem of parallelism the
way industry is planning is likely to fail, and we desperately
need a new solution for parallel hardware and software. Here are
some of our recommendations:

   * The overarching goal should be to make it easy to write programs
     that execute efficiently on highly parallel computing systems

   * The target should be 1000s of cores per chip, as these chips are
     built from processing elements that are the most efficient in
     MIPS (Million Instructions per Second) per watt, MIPS per area of
     silicon, and MIPS per development dollar.

   * Instead of traditional benchmarks, use 13 Dwarfs to design and
     evaluate parallel programming models and architectures. (A dwarf
     is an algorithmic method that captures a pattern of computation
     and communication.)

   * Autotuners should play a larger role than conventional compilers
     in translating parallel programs.

   * To maximize programmer productivity, future programming models
     must be more human-centric than the conventional focus on
     hardware or applications or formalisms.

   * Traditional operating systems will be deconstructed and operating
     system functionality will be orchestrated using libraries and
     virtual machines.

   * To explore the design space rapidly, use system emulators based
     on Field Programmable Gate Arrays that are highly scalable, low
     cost, and flexible. (see ramp.eecs.berkeley.edu)
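To make the autotuner recommendation above concrete, here is a toy sketch
(not from the talk or report; the function names and candidate block sizes
are hypothetical). Instead of trusting a compiler's static cost model, an
autotuner times several candidate implementations on the actual machine
and keeps the fastest:

```python
import time

def blocked_sum(data, block):
    """Sum a list in chunks of `block` elements.

    The block size does not change the result, only how the work is
    scheduled -- a stand-in for a real tuning knob like tile size.
    """
    total = 0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def autotune(data, candidates):
    """Empirically time each candidate block size; return the fastest."""
    best_block, best_time = None, float("inf")
    for block in candidates:
        t0 = time.perf_counter()
        blocked_sum(data, block)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_block, best_time = block, elapsed
    return best_block

data = list(range(100_000))
best = autotune(data, [64, 256, 1024, 4096])
```

Production autotuners (in the spirit of ATLAS or FFTW) search much larger
spaces once per machine and cache the winning configuration, but the core
loop is the same: measure, compare, keep the best.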

Now that the IT industry is urgently facing perhaps its greatest
challenge in 50 years, and computer architecture is a necessary
but not sufficient component of any solution, this talk declares
that computer architecture is interesting once again.

About the speaker:

David A. Patterson has been Professor of Computer Science at the
University of California, Berkeley since 1977, after receiving
all his degrees from UCLA. He is one of the pioneers of both
RISC and RAID. He co-authored five books, including two on
computer architecture with John Hennessy; the fourth edition of
their graduate book was released in September. Past chair of the
Computer Science Department at U.C. Berkeley and the Computing
Research Association (CRA), he was elected President of the
Association for Computing Machinery (ACM) for 2004 to 2006 and
served on the Information Technology Advisory Committee for the
U.S. President (PITAC) from 2003 to 2005.

His work was recognized by education and research awards from ACM
(Karlstrom Educator Award, Fellow) and IEEE (Von Neumann Medal,
Mulligan Educator Medal, Johnson Information Storage Award,
Fellow) and by election to the National Academy of Engineering.
In 2005 he shared Japan's Computer & Communication award with
Hennessy and was named to the Silicon Valley Engineering Hall of
Fame. In 2006 he received the Distinguished Service Award from
CRA and was elected to both the American Academy of Arts and
Sciences and to the National Academy of Sciences.




More information about the Concurrency-interest mailing list