FinFETs (transistors where the channel gets pinched off from both sides) sound like pretty slick design solutions, and several companies and many universities are studying them. But this IBM researcher predicts that they aren't manufacturable. Instead, industry needs to focus on advanced materials for the traditional "planar" transistor technology.
Wednesday, December 07, 2005
Wednesday, November 30, 2005
The author, Shankar Krishnamoorthy of Sierra Design Automation, describes the very typical timing analysis and design closure flow, where many different modes and PVT corners must be considered, yet closure tools only understand one or two of the scenarios.
The rigorous treatment of design variation, classified in a matrix of variability "causes" vs. "effects", is the best I've seen in a public article. (EDA vendors will show similar analyses when privately pitching their new variation-aware products.)
Surprisingly, the author does not conclude that "statistical" timing analysis, which is all the rage in the EDA community, is a panacea. He points out the difficulties of getting statistical characterization data for process, libraries, and interconnect. He also asserts that hold-time violations require analysis at whichever corner most aggravates a particular violation (calling for multi-corner analysis).
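The multi-corner point can be sketched in a few lines of Python. Everything here is made up for illustration -- the corner names, delays, and the simplified hold-slack formula are stand-ins, not any vendor's timing engine:

```python
# Illustrative sketch: hold slack for one flop-to-flop path, evaluated at
# several hypothetical PVT corners. A hold check fails if data arrives too
# soon after the capturing clock edge, so the binding corner is whichever
# one yields the *minimum* slack -- often the fast corner, not the slow one.

# Hypothetical delays in nanoseconds: (data_path, clock_skew, hold_req)
corners = {
    "slow_125C_0.9V": (0.42, 0.05, 0.08),
    "fast_-40C_1.1V": (0.21, 0.09, 0.06),  # fast silicon often worst for hold
    "typ_25C_1.0V":   (0.33, 0.06, 0.07),
}

def hold_slack(data_delay, clock_skew, hold_req):
    # slack = data arrival minus (capturing clock skew + hold requirement)
    return data_delay - (clock_skew + hold_req)

slacks = {name: hold_slack(*d) for name, d in corners.items()}
worst = min(slacks, key=slacks.get)
print(f"worst corner for this path: {worst}, slack = {slacks[worst]:+.2f} ns")
```

With these made-up numbers the fast corner is the binding one, which is exactly why a single slow-corner analysis can miss hold problems.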
Very thought provoking! It will be interesting to see what products Sierra comes up with to address this growing IC design closure problem.
Monday, November 28, 2005
Most of the chips have Microsoft's label on the package. At least in the case of the GPU, I believe that Microsoft bought the design from ATI and is paying royalties. I've read that this is a different business model from that of the original Xbox. Microsoft apparently wants more control over the silicon and any cost reduction efforts.
Wednesday, November 23, 2005
Looking just at the chips, this machine has some expensive silicon:
Semiconductors alone account for $340 -- more than 72% of the materials cost -- iSuppli estimates. One key component, the IBM-designed microprocessor chip at the center of the console (see BW Online, 10/25/05, "Inside IBM's Xbox Chip") costs about $106. Both IBM and Chartered Semiconductor (CHRT) of Singapore are building the chip for Microsoft.
ROOM FOR IMPROVEMENT. Analyst Chris Crotty with iSuppli says that as both companies improve their manufacturing efficiency and production yields, they will likely reduce the chip's cost by 20% to 25%. The same will likely apply to ATI (ATYT), which is building the graphics-processing unit, or GPU, for the Xbox. iSuppli estimates that the chip is the most expensive component in the system at $141.
And how about the cost of the Sony PlayStation 3, to be released next year?
Crotty expects that Sony's loss on the PlayStation 3 may be even wider, as the Cell processor that IBM, Toshiba, and Sony designed for the system is more complex.
Estimates vary as to how much the Cell processor will cost. Richard Doherty of Envisioneering Group in Seaford, N.Y., expects the Cell chip to cost about 50% more than the Microsoft chip. "Based on what we've seen so far, the PlayStation 3 could cost as much as $600 to make in today's pricing," Doherty says.
And Crotty says that since it's a more complex chip, its price will fall more slowly than the price on the Xbox chip.
Ouch! No wonder the games cost $60 per title, to make up for the hardware losses.
Monday, November 21, 2005
I enjoy this blog by Sramana Mitra, who is a technology and business prognosticator.
Her blog both covers the big picture and contains nuggets of individual business/career opportunities, such as
Configuring and managing Home Networks will be a big profession, and this job cannot be outsourced as easily. A largely non-tech savvy consumer population will demand that service personnel come out to their homes and fix things. Who pays? Consumer or Carrier? Without this support, Convergence will not cross the chasm.
11/22 UPDATE: A loyal reader (from India) pointed out that Sramana is a woman, not a man. My apologies! I've corrected my wording.
Tuesday, November 15, 2005
I'm sorry I missed ICCAD; it's right here in Silicon Valley. I need to get on the right mailing list to hear about such things. It's not as practical a conference as SNUG, but it's a good resource for a look at future design technology.
Monday, November 14, 2005
For a long time (since even I was a college student), US Engineering graduate schools have been populated primarily by foreign-born students, particularly from China, Taiwan, and India.
It used to be that such students would typically stay in the US to work. This was controversial, requiring H-1B visas and drawing allegations of taking American jobs or depressing wages.
Today, these students aren't necessarily sticking around. They're going back to work in their home countries, where they may be more comfortable or sense more opportunity. An example of this is described in EETimes.com - Alarming export: engineers.
Now these very bright students are taking their knowledge and work ethic back home. It's probably a net loss to the US to not have them sticking around. It shows you should be careful what you wish for!
Friday, November 11, 2005
It's an interesting list. Some of it makes sense, some doesn't. It would be best for me to look into the Analog/Mixed-Signal (AMS) field, since I'm not qualified to fill the African-American or Female Technical openings.
Tuesday, November 01, 2005
There's always tension between wanting a "one-stop-shop" with a highly integrated environment and flow, and wanting "best of breed" tools that produce the best, fastest results. Best of breed usually wins. Getting to an integrated environment? That's what CAD departments are for. ;-)
Tuesday, October 25, 2005
A Census of 338 Engineers on Design Verification Tool Use
is chock-full of EDA data painstakingly compiled by John Cooley.
It's nicely organized, and the summary at the top of each subject is well worth a read. As you get further into each article, there are a lot of one-line responses that aren't very informative, though you can get a feel for what other companies are doing.
Wednesday, September 21, 2005
The power reduction is through a three-pronged attack:
- Increasing threshold voltage. This is familiar to all nanometer-scale digital IC designers today.
- Low Damage Junction Engineering. Uh, that's some real process engineering, and I don't have much insight.
- Increased Gate Oxide Thickness. This is very interesting and counter to scaling and performance trends. But, when the gate oxide is only 3-5 atoms thick, you have to question "how thin is too thin?". The article just mentions making the oxide thicker. Presumably this is less risky than utilizing often-mentioned but not-in-production High-K Dielectrics.
Tuesday, September 20, 2005
I'm very intrigued by this idea. It has the promise of removing lots of overhead in chips for clock distribution and balancing. Not only does all the synchronous clock overhead use a significant amount of chip area, but it also accounts for a large share of the standard-cell power consumption! Every one of those clock buffers toggles twice per period, giving it a "toggle factor" of 2X.
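A back-of-envelope calculation shows how that 2X toggle factor translates into power. All the capacitance numbers below are made up for illustration; the point is only the ratio between clock and data activity:

```python
# Back-of-envelope sketch of why clock trees dominate dynamic power.
# Standard dynamic power model: P = alpha * C * V^2 * f, where alpha is
# the number of full charge/discharge events per cycle. A clock net makes
# one rising and one falling transition every cycle (alpha = 1), while
# typical data nets switch far less often (alpha ~ 0.1 here, assumed).

V = 1.0            # supply voltage, volts (assumed)
f = 1.0e9          # clock frequency, 1 GHz (assumed)
C_clock = 2.0e-9   # total clock-network capacitance, farads (made up)
C_data = 10.0e-9   # total data-network capacitance, farads (made up)

p_clock = 1.0 * C_clock * V**2 * f   # alpha = 1 for the clock
p_data = 0.1 * C_data * V**2 * f     # alpha = 0.1 for data nets

share = p_clock / (p_clock + p_data)
print(f"clock share of dynamic power: {share:.0%}")
```

Even with five times less capacitance than the data nets, the always-toggling clock ends up the bigger consumer -- which is the opportunity asynchronous design is chasing.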
I don't recall asynchronous logic design being taught in any of my engineering classes. There need to be more resources to learn about it.
Another big roadblock is the lack of EDA infrastructure for asynchronous design. All the commercial synthesis, timing, and test tools assume synchronous design.
Friday, September 16, 2005
Wednesday, September 07, 2005
I agree that SystemVerilog looks to be a better way to design and verify. It's more productive, removes some Verilog ambiguity, and has better support for formal verification.
I don't know the prospects for OpenAccess (hope it's not another CAD Framework Initiative or CHDSTD), but hope it succeeds. As Richard says, why should every startup (and every corporate CAD department) have to redevelop the infrastructure for EDA tools?
Don't know much about SystemC. Let's start using SystemVerilog first.
Monday, August 29, 2005
The Intel Development Forum got tons of press. Intel is running away from the "most gigahertz" school of CPU design because of the stifling problems of power management. I'll link the best articles with IDF coverage here:
Update: this article in BYTE magazine (subscription required) describes how Intel really wants programmers to write parallel code.
This EE Times report from the Hot Chips conference, NVIDIA scientist calls for expanded research into parallelism, raises one of the "dirty little secrets" of all the hype about multi-core CPUs -- it is hard to make applications multi-threaded! Do we already have a good programming language for describing an application's parallelism, or is a new language needed?
Meanwhile, those working in the graphics arena are ideally suited to taking advantage of Moore's Law:
Kirk contrasted this situation against the entirely different structure inside the GPU. "Graphics has been called embarrassingly parallel," he said. "In effect, each stage in our pipeline, each vertex in the scene and each pixel in the image is independent. And we have put a lot of effort into not concealing this parallelism with programming."
This allows a GPU developer to simply add more vertex processors and shading engines to handle more vertices and more pixels in parallel, as process technology allows. "We are limited by chip area, not by parallelism," Kirk observed.
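The "embarrassingly parallel" property Kirk describes can be demonstrated in a few lines. The shade() function below is a hypothetical stand-in for a real pixel shader; the point is that each pixel depends only on its own coordinates, so any partitioning of the work gives a bit-identical image:

```python
# Illustrative sketch of embarrassingly parallel pixel work: each pixel's
# result depends only on its own coordinates, never on its neighbors, so
# the image can be split across any number of shading engines with no
# coordination or synchronization.

def shade(x, y):
    # toy per-pixel computation -- no shared state, no dependencies
    return (x * 31 + y * 17) % 256

W, H = 64, 48
pixels = [(x, y) for y in range(H) for x in range(W)]

# Serial reference result
serial = [shade(x, y) for x, y in pixels]

# "Parallel" result: split the work into 4 independent chunks, shade each
# chunk separately, and recombine. Independence guarantees the same image
# regardless of how the pixels are partitioned among engines.
chunks = [pixels[i::4] for i in range(4)]
parallel = {p: shade(*p) for chunk in chunks for p in chunk}
assert serial == [parallel[p] for p in pixels]
print("4-way split matches serial shading exactly")
```

Contrast this with general-purpose multi-threading, where shared state and ordering dependencies make such a carefree split impossible.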
Thursday, August 25, 2005
Each of these contains more transistors than an AMD Athlon 64 CPU. There is a lot of power (both computing and electrical) in these chips!
Tuesday, August 23, 2005
I like the article more for this history than any insight I have into their new router's prospects. Also very impressive is the amount of money that Cadence paid to acquire CCT. It would be really interesting to do an analysis of what EDA vendors have paid for acquisitions, and what returns they got. How about Cadence's acquisitions of Ambit and HLD Systems? HLD in particular didn't seem to last long.
Thursday, August 18, 2005
Tuesday, August 16, 2005
Constructing the next transistor, in EE Times, is a superb article describing the technical challenges to building useful transistors below 65nm. It covers the physical and materials challenges that are arising, and how the solution to one problem (e.g., low leakage power) may aggravate another (high performance).
I'd love to have seen this illustrated with some pictures -- I wonder if the print edition has that? In any case, if you're a semiconductor engineer or scientist, you'll find this article worthwhile.
Why, it even makes me want to dust off my semiconductor textbooks to remember what's really going on inside a MOS transistor!
Tuesday, August 02, 2005
I admit I'm not a financial guru, but trounces seems like a strong word for "Cadence recognized net income of just $500,000, or $0.00 per share"!
UPDATE: Synopsys will announce their 3Q2005 earnings on August 17.
- Cadence Trounces Guidance Again
- Magma Reports Fiscal Q1 Sales Up 8% The fourth largest EDA tool supplier reported record high revenue of nearly $39 million for the fiscal quarter ended July 3.
- Mentor Q2 Sales, Earnings Fall Due to lower than expected bookings, the third largest EDA player saw both revenue and earnings per share fall, but remains positive for future growth for new and emerging products.
Thursday, July 28, 2005
Wednesday, July 27, 2005
I'm very curious about how front-end or RTL design can influence manufacturability or yield in a Standard Cell flow. I don't see the connection.
Improving yield in RTL-to-GDSII flows sounds like it would explain it all for me, but the connection to RTL design still seems to be missing. I see where synthesis might be able to select cells out of a "high yield" library (if such a beast existed), and certainly there are things to do during routing, such as adding redundant vias. But this is all physical design. Is the logic designer off the hook?
- Signal Integrity
- Visualizing the behavior of Logic Synthesis algorithms
Some of the other papers are more oriented to Magma tool capabilities, but because their tool suite is modern and novel, they should be informative as well.
Wednesday, July 20, 2005
Monday, July 18, 2005
Interoperability plays a key role in elevating designers to a higher level of productivity, just as Tenzing Norgay's efforts facilitated the first ascent to the top of Mount Everest, said Rich Goldman, vice president of Strategic Alliances at Synopsys.
Ah, it all makes sense now. ;-)
- Managing power
- Managing the memory bandwidth bottleneck
Here for the foreseeable future is a world of parallelism, of increasingly application-directed architectures and of an unending struggle for memory bandwidth rather than Mips.
Monday, July 11, 2005
- How do they know what IP to diffuse onto the base layers? How do you reconcile one customer wanting N Serdes macros with another wanting M megabits of RAM?
- Who is really using Structured ASIC? How real is it, and for what applications?
Thursday, July 07, 2005
- The argument for "scaling out" is similar to NVIDIA's SLI strategy, which links two GPUs to work in parallel on graphics rendering.
Friday, July 01, 2005
Thursday, June 30, 2005
Monday, June 27, 2005
- Magma's Quartz DRC high-performance distributed DRC.
- Apache PsiWinder critical-path and clock tree analysis tool that considers crosstalk and dynamic-power integrity effects.
- Sierra Pinnacle physical-synthesis tool to concurrently optimize timing, area, power and signal integrity across all operating modes and corners.