RedHat Server and ASIC Design Kit

I’ve just spoken to SJM this afternoon about the status of the RedHat servers. He told me that they should have everything set up in the next couple of days. Hopefully, I’ll be able to start looking at the design kit compatibilities as soon as possible.

He mentioned that he might want to try running it on the SuSE system to see if it works there. If it does (there isn’t any real reason why it shouldn’t), then we may just run the tools on SuSE instead.

I’ve discovered something interesting called the ASIC Design Kit (ADK), which is a generic design kit provided by Mentor under their Higher Education Programme. However, I’m not sure if we have access to the ADK, as it is not listed directly on either the EUROPRACTICE site or the RAL site.

From the ADK documentation, the target technologies are AMI 0.5um and 1.2um, and TSMC 0.35um, 0.25um and 0.18um. This is good as we have direct access to the AMIS 0.5um back-end design kit and we only need one foundry process for teaching purposes. From the ADK documentation, the kit provides:

  • Support for schematic, HDL or mixed schematic/HDL based designs
  • Synthesis support for Leonardo Spectrum
  • Pre-layout timing simulations with QuickSim and ModelSim (VHDL or Verilog)
  • Scan insertion support for DFTAdvisor
  • Automatic test pattern generation support for FastScan or FlexTest
  • Static timing analysis models for SST Velocity
  • Automatic place and route of designs using IC Station
  • Post-layout timing simulations with QuickSim, ModelSim (VHDL/Verilog), Mach TA, or Eldo
  • Support for Design Architect-IC

Therefore, it provides us with everything we need. Hopefully, we can get access to the ADK directly; since it is supported directly by Mentor, this would solve our design kit problems for good. Otherwise, I will have to fall back on a backup plan: the AMIS 0.5um design kit, which seems the most promising alternative as it has front-to-back support for the design flow.

New VLSI Project

I spent some time thinking of a possible new design that might be suitable for the new VLSI project. As the students work in pairs, it would be easiest to get them to design some sort of communications device, with one designing the transmitter and the other the receiver. Then, they can put it all together at the end to see if it works. However, I have to keep in mind that most of these students will have had little exposure to hardware design, much less a VLSI one. This means that the project needs to be suitably easy for noobs, but also extensible for the experienced few.

For this, I looked at some of the available open source designs on OpenCores and also some of the application notes from Xilinx. There is plenty of information out there, which is good. If we use examples from the Xilinx AppNotes, they come with some basic background info, which can be used for the handout. As it is quite often easier to design the transmitter than the receiver, I thought that we’d just do that design for them. We can then get the students to design the higher-level parts of the stack.

DMH wants to keep the ring oscillator as part of the new design. This is simple enough to do as we can just use that as the internal clock source. To make things interesting, we could spec it so that the students either design different clocks for the RX and TX or use different phases of the clock for the RX and TX. Keeping this in mind, I came up with an idea for a simple communication project for the students.

General Project Idea

  • We spec different clocks for the receiver and transmitter whose frequencies are not integer multiples of each other (e.g. 2MHz and 3MHz).
  • We provide complete physical layer designs, conforming to a standard protocol (e.g. RS232/SPI/I2C…).
  • We get them to complete the encoder/decoder. We provide them with partial code.
  • We get them to design an error detection scheme. This can be as robust as they want (e.g. Parity/CRC/Hamming…).
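
To get a feel for how small the error-detection step could be kept for the weaker students, here is a minimal software model of a single-parity-bit scheme, the simplest of the options listed above. The 9-bit frame layout (8 data bits plus one parity bit) is my own assumption for illustration, not part of the spec.

```cpp
#include <cstdint>

// Software model of a simple even-parity scheme, the easiest of the
// error-detection options (parity/CRC/Hamming). Frame layout is a
// hypothetical 9 bits: 8 data bits plus 1 parity bit in bit 8.

// Compute the even-parity bit for an 8-bit payload.
uint8_t parity_bit(uint8_t data) {
    uint8_t p = 0;
    for (int i = 0; i < 8; ++i)
        p ^= (data >> i) & 1u;      // XOR-reduce the payload bits
    return p;                        // 1 if the payload has an odd number of 1s
}

// Transmitter side: append the parity bit as bit 8 of the frame.
uint16_t encode(uint8_t data) {
    return static_cast<uint16_t>(data)
         | (static_cast<uint16_t>(parity_bit(data)) << 8);
}

// Receiver side: true if the frame passes the parity check.
bool check(uint16_t frame) {
    uint8_t data = frame & 0xFFu;
    uint8_t p = (frame >> 8) & 1u;
    return parity_bit(data) == p;
}
```

A single flipped bit anywhere in the frame makes the check fail, which is exactly the behaviour the students would then reproduce in HDL; the more ambitious pairs can swap this block out for a CRC or Hamming code without touching the rest of the design.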

Assuming that the toolchain works automagically with the design kits, they can finish the front-end design in under 2 weeks, which leaves another 2 weeks for the back-end work; that should be enough if the design kits work properly. The back-end work will involve substituting a single custom logic gate into the design and then putting the whole design through the automated tools.

Working Thru the C++ Labs

I’ve just received a reply from TL with regards to the C++ labs, as he had only recently returned from holiday. This is August, the month when everyone except research students goes off on holiday. He has sent me all the relevant handouts/labnotes via UMS. I guess that I’ll need to do the following:

  1. Ensure that the GCC toolchains still work.
  2. Ensure that the example code snippets are accurate.
  3. Ensure that the experiment instructions are clear.
  4. Familiarise myself with all the different experiments.

So, I will go through them one by one before I go off on my break. I’d normally do this for any new experiment I was introduced to anyway. It’s important not only to go through the motions but also to try to imagine the problems that students might face while doing them. Hopefully, with my experience in programming, it will not be too difficult to get through, although I’m a little rusty on C++ as I’ve been mostly programming in low-level C for the last couple of years.

I’m a little confused by the idea of doing a C++ experiment though. I did not learn programming through formal methods, so I’ve always been partial to the hack-n-slash school of thought. It would be interesting to see how they develop programming concepts through structured experiments. I guess that there will be experiments on assignments, control structures and loops, plus more interesting stuff like arrays, data structures, pointers and objects.
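
I haven’t seen TL’s handouts yet, so this is purely my guess at the style, but a typical structured exercise on arrays and pointers might ask students to compute the same thing two ways and check the results agree:

```cpp
#include <cstddef>

// Hypothetical lab-style exercise: sum an array once with indexing and
// once with pointer arithmetic, then confirm both give the same answer.
// (My own sketch; the actual lab material may differ.)

int sum_indexed(const int* a, std::size_t n) {
    int total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += a[i];              // array-subscript form
    return total;
}

int sum_pointer(const int* a, std::size_t n) {
    int total = 0;
    for (const int* p = a; p != a + n; ++p)
        total += *p;                // pointer-arithmetic form
    return total;
}
```

Exercises like this are a nice bridge from the hack-n-slash style to something more principled: the two loops are semantically identical, and seeing that explicitly tends to demystify pointers.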

I look forward to trying them out.

Initial EUROPRACTICE Library Search

I’ve just spent a few hours doing a preliminary search of the available design libraries from EUROPRACTICE through the IMEC website. The following is a quick run down of the available foundry libraries:

  1. UMC – Mentor full custom back-end supported. Highly doubt that the FSC/VST standard cells are usable with Mentor.
  2. AMIS – Standard cells for Synopsys, Mentor back-end. Some unconfirmed successes getting it to work with Mentor.
  3. AMS – Standard cells for Synopsys/Cadence, Mentor back-end.
  4. TSMC – No publicly accessible info.

So, this doesn’t bode too well. I might have to look at what standard libraries come with Precision Synthesis. I know that Leonardo Spectrum comes with a built-in standard cell library. But, Leonardo is already being EOL-ed. So, it doesn’t make sense to base a new project on a tool that will be dead soon. But, the bad news with Precision is that, at first glance, the tool is optimised for FPGA implementations and not for ASIC design. So, it might be necessary to use Leonardo after all.

I’ve just come across a potential solution. According to this website, it is possible to use the old AMIS ADK with the newer Mentor ICFlow2005. From a quick check of the Rutherford website, we have access to the older AMIS design kits and also to the latest ICFlow2005.1 software. So, it might be possible to use an AMIS 0.35um or 0.50um design kit standard cell library for designing with Mentor Graphics.

I’ve contacted SJM to find out the status of the Mentor installation on RedHat. He has yet to get back to me on it. Once it is set up, I will investigate the built-in libraries available in Mentor and check out the toolchains and design kits for AMIS. There isn’t much else that I can do until then.

Search Acceleration – Introduction

My current PhD project is designing a hardware-based search accelerator. Essentially, it will be like a graphics/audio co-processor, but one that accelerates search algorithms. The surprising thing that I encountered when I first started on this project more than a year ago was that there is very little prior art and literature on the matter. There is a lot of work on hardware engines that speed up individual pieces of the search problem, but there didn’t seem to be a comprehensive look at a new architecture for speeding up search algorithms as a whole.

So, I thought, why not? We all know that search is an integral part of modern computing. It’s so integral that it is working itself downstream, from the back-end enterprise servers onto the user desktop today. Everyone can and will benefit from search acceleration. Therefore, it makes sense to try to design a processor architecture with search in mind. There are several characteristics of search algorithms that modern microprocessor architectures don’t handle well.

So, what are the major bottlenecks of running a search algorithm on a general purpose processor? Please keep in mind that I’m not a CompSci student. So, my knowledge in these matters is limited. However, I started my approach by looking at algorithms and data structures. After reading up on these topics, I have found some angles to attack the problem from.

  1. Processor Architecture
    General purpose processors (GPP) aren’t designed to handle such algorithms quickly. This isn’t much of a discovery, as everyone knows that the GPP isn’t optimised for specific applications but made to be a Jack of all trades. So, we can attack this problem by stripping the processor architecture down and reshaping it into something more suited to handling these algorithms.
  2. Memory Architecture
    Algorithms are bound by the size of their input, N; in the case of search algorithms, that means the number of records that need to be scanned. Due to limitations in memory technology, getting at these records can be expensive. The usual method for speeding up memory access is caching. However, present-day caches exploit temporal and spatial locality of reference, and search algorithms exhibit little temporal locality: once a record has been scanned, it is rarely needed again. So, we can attack this problem by designing a new cache architecture that takes structural locality into account.
  3. Search Operations
    Search operations perform some sort of comparison against a key and then do something if it matches the comparison criterion. In GPPs, these are typically implemented as conditional branch instructions. Even with the advances made in branch prediction technology, branching is still an expensive operation, whether in time or transistors. So, we can attack this problem by designing an architecture that reduces branches and makes branching cheap.
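
The branching point above can be made concrete with a small sketch. The record layout and key type here are my own assumptions for illustration; the idea is just that a comparison result can feed arithmetic directly instead of steering a data-dependent branch:

```cpp
#include <cstddef>

// Sketch of point 3: counting how many records match a key, written
// with a conditional branch per record versus branch-free.
// (Illustrative only; real search kernels are more involved.)

// Branchy version: one data-dependent branch per record, which is
// hard for a branch predictor when matches are irregular.
std::size_t count_matches_branchy(const int* records, std::size_t n, int key) {
    std::size_t hits = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (records[i] == key)
            ++hits;
    }
    return hits;
}

// Branch-free version: the comparison yields 0 or 1, which is added
// unconditionally, so the loop body has no data-dependent branch.
std::size_t count_matches_branchless(const int* records, std::size_t n, int key) {
    std::size_t hits = 0;
    for (std::size_t i = 0; i < n; ++i)
        hits += static_cast<std::size_t>(records[i] == key);
    return hits;
}
```

On a GPP the branch-free form is a software trick; the point of a search-oriented architecture would be to make this the natural, cheap way of expressing comparisons in hardware.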

So, hopefully, by attacking the problem from these different angles, I would be able to design a processor architecture that is suitable for speeding up search algorithms. I’m not sure how much of a speed up I can hope to obtain. However, I’m hoping that it will more than double the search performance, when compared to a standard processor architecture. As with all my other processor designs, I plan to keep my design elegantly simple, small and fast. This is proving to be the problem.

Re-working the VLSI Project

For my first task as the new Division B TA, I have been asked to rework the 3rd year VLSI design project. Since it is directly related to chip design stuff, I have decided to blog about it here. Although I have been a part of this project for the last 3 years, this time around, we are making a huge design change. Design tools being EOL-ed and a change in the department’s teaching philosophy have necessitated a massive change in the project. So, my basic plan now is to:

  1. Ensure that the design tools are set up correctly on our new server.
  2. Identify the associated design kits and design libraries to be used for the new project.
  3. Bring a simple ring-oscillator design through the design flow to make sure all the critical steps work properly.
  4. Come up with a new design project that encourages group collaboration instead of just bringing a design through the flow.
  5. Work out the specific flow that will result in bringing this design from design entry through to final tape out.

As a start, I have contacted the computer officer in charge of setting up the new server and tools to find out their status. I’ve asked him to let me know (as soon as possible) which tools have been installed so that I can take them for a test drive. I’ve also been looking up the design libraries for Mentor Graphics that we might use. At first glance, EUROPRACTICE does not supply any that would fit. However, I will send EUROPRACTICE an email to ask if they can recommend the necessary toolchain and libraries.

Hopefully, there will be a suitable library that can be used. Otherwise, it’s going to be a long road to get things working.

Do we need Open Source Hardware?

A recent article at Linux.Com asks whether we need an Open Source Hardware License (OSHL), and wonders if current Open Source Software (OSS) licenses might suffice. There is also the question of whether we actually need Open Source Hardware (OSH) at all. Coming from that particular source, I find it weird that they would even think such things, much less voice them like that.

There is no doubt that OSS licenses have proven good for software. However, hardware is quite another issue. The article says that most hardware designs are software anyway: VHDL and Verilog are hardware description languages that are semantically similar to software programming languages. This, though, points to a big difference between the software and hardware people: a certain lack of understanding of hardware on the software side.

Hardware can certainly be designed in VHDL and Verilog. However, that’s only one way of representing hardware; it can also be designed in schematics. Now, you may wonder who still designs in schematics in this age of multi-million-gate designs. The answer is: the hardware people still do. This is done every day by people working on analogue and mixed-signal designs. There is currently no way of describing analogue designs in a “language” such as VHDL or Verilog; although both languages have analogue extensions, these are currently only suitable for simulation, not synthesis. Analogue design is still very much an art form, so a lot of work is still done in drawings. Such works could, by extension, be covered under copyright law. That is rightly so, but the OSS licenses aren’t necessarily applicable to drawings.

Derivative works are another issue altogether. OSS licenses usually spell out terms for binary or compiled versions of the code, but hardware isn’t exactly compiled into binary form. Some may argue that synthesis is akin to a compilation process, and it is true that synthesis converts design descriptions into a hardware representation. However, the result is neither a binary nor a compiled program; it is a synthesised design. OSS licenses leave things like that ambiguous and arguable. If a license mentioned “synthesised” works specifically, there would be fewer questions about what counts as a compiled version of the code. So, an OSHL is absolutely necessary if we wish to see better proliferation and uptake of open hardware designs.

Then, the whole question of whether we actually need OSH at all is just silly. Coming from a platform that advocates open source, I cannot imagine such a question being asked. They wonder whether people can hack hardware like they do software; since not many people can afford to play with hardware (even with FPGAs) and fewer still can afford tape-out runs, the article asks who the license would actually benefit. The answer was given straight by one of the comments: everyone. If hardware were open sourced, there would be very little problem writing OSS drivers for it. There would no longer be a need for reverse engineering to write Linux drivers; people could just look at the hardware source and write the drivers directly.

We haven’t even gone into the whole idea of community hardware development. Complex pieces of hardware like modern microprocessors have lots of bugs in them. Wouldn’t it be nice if there were many eyeballs looking at the design code to help fix them? I have always felt that greater transparency in everything is a good thing for everyone. Let’s stop all these secrets.