Introducing the Dingo Console!

Hello everyone! Several years ago, I wanted to make my own Open Source game console that could compete with the modern consoles of the day. Well, now I'm ready to start building it. Of course, back then I thought a powerful console would be a piece of cake, but now I know better, so this little guy is going to be around the GameCube in performance. Anyway, I mention it here 1) to spread the word, and 2) because I need all of your input on this. You see, I want this thing to support Java, and on top of that be capable of running JME. But before I go into what I need for that, here is what the Dingo Console will have:

  • 512 MB DDR 400 RAM (capable of a bandwidth of 12.8 GB/s)
  • Pentium III CPU running at 800MHz
  • 160GB Sata HDD
  • CD drive
  • SD card reader
  • Bluetooth (for wireless controllers only)
  • 4 USB ports (also for controllers)
  • Wifi
  • Custom programmed stereo DSP capable of 16 channels at once, and a lot of cool features (actually, it’s some form of ARM microcontroller programmed to be a DSP)
  • Custom designed GPU and Northbridge programmed onto a FPGA
  • Oh, and did I mention EVERYTHING will be open source, including the GPU? That means no more binary blobs!
  • Documentation on how to hack this little guy. As in, you will be allowed (and encouraged) to modify the bejeezus out of this thing. Well, except for maybe the GPU/NB because you could easily destroy something if you don’t know what you’re doing.
  • Custom made OS and drivers (as if I have a choice)

Jeez, that list formatting was a lot more difficult than I remember it being… Anyway, I want to put a lot of emphasis on two things: The DSP and the GPU/NB. Let’s start with the DSP.

What I want the DSP to do is take in sound data (obviously) and apply lots of cool effects and enhancements to it. It will have 16 channels and will output in stereo. For input data, I have come up with my own audio compression format that only stores the points of inflection in the sound wave (points where the waveform changes direction). I am getting close to getting this format to work in Java, but there are still a few (major) bugs to work out. So until then, it'll also support WAV. Some effects the DSP will be able to do include echo, fade in/out, overdrive (which is like intentional clipping), audio blending, tone shifting (useful for simulating instruments), and a bunch of other stuff that I can't remember right now.
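For anyone curious what I mean by storing only the points of inflection, here's a rough Java sketch of the idea. The class and method names are made up for illustration, and my real format has more to it than this (these are just the turning points plus the endpoints):

```java
import java.util.ArrayList;
import java.util.List;

public class InflectionCompressor {
    /** One stored point: sample index plus amplitude. */
    public record Point(int index, int amplitude) {}

    /**
     * Keeps only the samples where the waveform changes direction
     * (local minima/maxima), plus the first and last samples, and
     * drops everything in between.
     */
    public static List<Point> compress(int[] samples) {
        List<Point> points = new ArrayList<>();
        if (samples.length == 0) return points;
        points.add(new Point(0, samples[0]));
        for (int i = 1; i < samples.length - 1; i++) {
            long before = (long) samples[i] - samples[i - 1];
            long after  = (long) samples[i + 1] - samples[i];
            // A sign change in the slope marks a turning point.
            if (before * after < 0) {
                points.add(new Point(i, samples[i]));
            }
        }
        if (samples.length > 1) {
            points.add(new Point(samples.length - 1, samples[samples.length - 1]));
        }
        return points;
    }
}
```

A triangle-ish wave of 7 samples collapses down to 4 stored points; decompression would then interpolate the waveform back between them.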
The DSP will only take in audio at a 48KHz sample rate at 24 bits, but it'll have two features called Over Sampling Anti-Aliasing (OSAA) and Bit Depth Anti-Aliasing (BDAA). What OSAA does is, while the DSP calculates the output waveform, estimate the levels between each pair of adjacent audio samples. This way, the output waveform is smoother and less blocky. OSAA will have three modes: HD (96KHz), UHD (192KHz), and XHD (384KHz).
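To give an idea of what OSAA's level estimation could look like, here's a Java sketch using simple linear interpolation. The actual DSP will likely use something smarter, and the names and the integer-factor API are just for illustration:

```java
public class Osaa {
    /**
     * Upsamples a buffer by an integer factor using linear
     * interpolation: each gap between two adjacent input samples
     * is filled with evenly spaced estimated levels.
     * From a 48KHz input, factor 2 ~ HD (96KHz), 4 ~ UHD (192KHz),
     * 8 ~ XHD (384KHz).
     */
    public static int[] oversample(int[] in, int factor) {
        if (in.length < 2) return in.clone();
        int[] out = new int[(in.length - 1) * factor + 1];
        for (int i = 0; i < in.length - 1; i++) {
            long delta = (long) in[i + 1] - in[i];
            for (int k = 0; k < factor; k++) {
                // Estimated level k/factor of the way from in[i] to in[i+1].
                out[i * factor + k] = (int) (in[i] + delta * k / factor);
            }
        }
        out[out.length - 1] = in[in.length - 1];
        return out;
    }
}
```

So `oversample(new int[]{0, 10}, 2)` fills in the midpoint and gives `{0, 5, 10}` — the same waveform with the blocky steps halved.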
BDAA is a feature that increases the working bit depth from 24 bits to 32 or 64 bits. This leads to more accurate calculations and a smoother waveform. Combine BDAA 64 with XHD OSAA and you have extremely smooth sound waves (and an overloaded DSP), possibly far smoother than humans can notice, per the Nyquist theorem (I think that's what it's called).
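Here's a tiny Java sketch of why the extra working precision helps. I'm using 8 extra fractional bits as a stand-in for BDAA's wider 32/64-bit intermediates (the names are made up): a gain-down followed by a gain-up survives in the wide form, while plain integer math loses the fraction for good.

```java
public class Bdaa {
    /** Gain applied with plain integer math: the fraction is lost. */
    public static int gainNarrow(int sample, int num, int den) {
        return sample * num / den;
    }

    /** Widens a sample by 8 fractional bits (stand-in for 32/64-bit math). */
    public static long toWide(int sample) { return (long) sample << 8; }

    /** Rounds the wide intermediate back down to the narrow sample. */
    public static int toNarrow(long wide) { return (int) ((wide + 128) >> 8); }

    /** Same gain, but computed on the wide intermediate. */
    public static long gainWide(long wide, int num, int den) {
        return wide * num / den;
    }
}
```

Halving then doubling the sample value 7 gives back 6 in narrow math (7/2 truncates to 3) but the original 7 in wide math.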

Now on to the GPU and Northbridge. I gave it a model number, the D832M. The GPU portion will contain eight 32-bit rendering cores, each running at a max frequency of 400MHz without overclocking. They can set their frequencies to what they need, at a minimum of 100MHz. Each core will have 256KB of shader code cache, and they will all share an 8MB texture cache. The GPU will split the RAM with the rest of the system, although this split isn't set in stone. The game developer has the power to choose the RAM split, in fact the entire memory map, thanks to the optimized Northbridge architecture. More on that later. The GPU will have two video modes, PAL and NTSC, each with two resolutions, 640 x 480 and 720 x 480. Unlike other console GPUs, this one will support shader scripting, although it won't be OpenGL, but instead my own made-up language.
It took a long time to come up with a good Northbridge design. At first I was only going to do a single internal bus connecting everything, with a fixed memory map. But that wouldn't be efficient or flexible. So in the end, the NB will have a pipelined architecture and will be able to split up bus activity. For example, if the CPU wanted to upload audio data via SPI to the DSP, but the GPU was in the middle of a RAM cycle, the old design would force the CPU to wait until the GPU finished. With my new architecture, the CPU won't need to wait. The new design allows more than one bus combination to be active at a time, but if a bus port is busy, the other devices will have to wait. I.e., the CPU and GPU can't both use the RAM at the same time. Yet. I do want to add support for splitting the RAM bus itself, so the GPU and CPU can both use it at the same time. But that's a whole 'nother beast.
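That port-busy rule can be modeled in a few lines of Java. This is just a behavioral sketch of the arbitration policy, not the actual hardware design, and the device names are whatever you pass in:

```java
import java.util.HashSet;
import java.util.Set;

public class Crossbar {
    private final Set<String> busyPorts = new HashSet<>();

    /**
     * Grants a transfer only if both the source and destination
     * ports are free. Independent transfers (say CPU->DSP over SPI
     * while GPU->RAM is in flight) run at the same time, but two
     * devices can never share one port.
     */
    public boolean request(String src, String dst) {
        if (busyPorts.contains(src) || busyPorts.contains(dst)) return false;
        busyPorts.add(src);
        busyPorts.add(dst);
        return true;
    }

    /** Frees both ports once the transfer finishes. */
    public void release(String src, String dst) {
        busyPorts.remove(src);
        busyPorts.remove(dst);
    }
}
```

With a GPU→RAM transfer in flight, a CPU→RAM request is refused (RAM port busy) while a CPU→DSP request goes through immediately — exactly the behavior described above.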

Now onto the JME part. My OS will eventually be able to run Java, and then JME. The only issue I have is compatibility. Since this console doesn't have OpenGL support, I'd have to modify JME to use what this console has. And then I'd have to port the native libraries to work with this system. So my question is, how hard is it to do that? And is JME even runnable with these specs? It would be really cool to see JME run on this thing, that is, when I can come up with $600 and the time to build and program it. But hey, when I get this thing running, I'd be the first high schooler to make something like this (as far as I know). So I want your input on this: what do you guys think? And so you don't waste your time, I already know this is going to be difficult. So you don't need to tell me.

Hackaday page here (slightly outdated): The Dingo Console


That sounds like a really difficult endeavour. But you seem to have thought it through thoroughly.
IMO, you could ditch the CD drive… It looks like it would be an indie dev console serving something of a niche market (a bit like the Ouya did), and burning CDs is not really in fashion anymore IMO… It would also cut down the price of the hardware.
You'd better have a solid download portal (Steam, Google Play, etc…)

I would actually give it more RAM, as 512MB doesn't sound like much. But I could be wrong.

I agree, is the 512MB of RAM shared with the GPU or is there going to be dedicated graphics memory? Why use a custom shader language instead of GLSL?

Oh yeah, that’s right. Well I have so many spare CD drives that I’m trying to make use of them all one way or another. The CD drive will connect to a SATA port, so it’s completely optional. I still burn CDs, so it’s more for my own use. But that’s why I also included an SD card reader! And yes, there will most definitely be a download portal. When I figure out how to get one set up.

I always thought 512MB was more than enough for a game console, considering what other consoles of the time had. The GC had 43MB, the Xbox had 64MB, and the PS2 had 36MB (I just realized this console has much more memory bandwidth than all three of those :chimpanzee_closedlaugh: ). The OS for this console will be designed to take up as little memory as possible (the GC OS took up 256 BYTES of RAM), leaving plenty for whatever else is running. And yes, the RAM is shared with the GPU. And I don’t really want to spend $100 on RAM, 512MB is the most I can afford.

Actually, I could make a cross compiler to translate GLSL to my shader assembly. What I meant was I’m writing my own GPU core assembly instructions.

So is this something you’re hoping to get mass produced or you’re just planning on throwing something together out of spare parts?

Why not just build a mini-PC and install a custom Linux on it? I know next to nothing about consoles, but I imagine this would be significantly easier without causing any issues – it would also make it easier to “hack”, as the way PCs are put together is already well known and very well documented. Also, I think I remember hearing somewhere that the Xbox was just a PC + Windows kernel - OpenGL + Xbox interface. I also think I remember hearing that the PS4 runs on the BSD kernel. This would also make installing Java easy, as you could simply package OpenJDK and/or Oracle's Linux Java (depending on licensing terms).

This is just something to throw together for the fun of it. I may make it mass produced and have it targeted for the indie community (because what kind of professional gamer would want to use a console that uses ancient technology?), but that’s a whole different topic that can wait. But for now, it’s just a mix of my boredom, trying to clean out my garage without throwing anything away, and just seeing what I can do. Most of the prototype is in fact made of spare parts, mainly the PIII CPU. Hey, might as well use it for something, right?

I could do that, but then what’s the fun in that? The whole point of this project is so I can learn about computer design. Plus it’ll look really REALLY good to colleges. But mostly because I think it’s fun to make things.

I know you’ve already rationalized your reasons for recycling an existing PC…

…but a Raspberry Pi running OpenGL ES would be another fun console base.


Been there, done that. I have two Raspberry Pis, and I reserved one for game emulation. Thousands of others did the same thing. I want to do something nobody has ever done before. And again, it's not that fun if I don't get to design my own motherboard and OS and all that stuff. And the GPU on the Pi is not open source, which is another reason for making this console.
Buuuuuut, as a training exercise, I am making an ARM-based portable console. It's not a Pi, but again my own design. I already know I can get JMonkey to run on that with a Linux kernel, but I'm more concerned about my bigger console, since Linux probably won't run on such custom-made hardware.

Linux is open source, so you could modify it for compatibility. I don’t know if they would accept it into the main repository but you could certainly run it.

It’s a bit interesting because modern consoles seem to be moving away from custom hardware towards commodity hardware. The PS4 is essentially a standard x86 PC whereas the PS3 was based on IBM SPEs. Nintendo NX (the next Nintendo console) is rumored to be running Android instead of running a custom OS like the Wii U and 3DS did.

This one has really caught my attention. Are you planning to do it from scratch, or base it on previous open-source attempts? Is it feasible as a one-man show? It's going to be an interesting set of Verilog/VHDL hacks.

Also, the Pentium III itself is not open source :smiley:

I guess I could do that. It would make my life a lot easier.

And that’s why I hardly ever use a console anymore. In my opinion, companies have ruined the original intention of having a group of friends sit together and play Smash Bros on their N64. That’s another purpose for the Dingo Console, to bring back the old way game consoles were meant to be.

Yes, 100% from scratch. I know a considerable amount of CPU design, and I’m willing to take on the challenge.

What are you talking about? I have the datasheets for the PIII and the GTL+ bus on my computer right now, and no, I didn't pirate them. I don't have the bus timings, but I'll figure that part out with a little (okay, a lot) of trial and error. I've looked at both datasheets, and it shouldn't be much different from, say, the 68K, just a handful more signals and wider data and address buses.

(Geez it took awhile to dig this thread up)

I have made a small design change since the original post. The RAM is now going to be 1GB of DDR2 running at (hopefully) 800 MHz. That should be enough to run JME. But with that comes a compromise: in order to keep the board design simple enough, I had to drop the data width to 64 bits instead of 128. But if I route everything correctly, I should be able to run the RAM at full speed, giving me plenty of memory bandwidth to work with.
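For reference, if my DDR2 math is right, the peak bandwidth at a 64-bit width works out like this (back-of-the-envelope only; real sustained bandwidth will be lower):

```java
public class Bandwidth {
    /**
     * Peak DDR bandwidth in GB/s: megatransfers per second times
     * the bus width in bytes, divided down to GB.
     */
    public static double peakGBps(long megaTransfersPerSec, int busBits) {
        return megaTransfersPerSec * (busBits / 8) / 1000.0;
    }

    public static void main(String[] args) {
        // DDR2-800 on a 64-bit bus: 800 MT/s x 8 bytes = 6.4 GB/s peak
        System.out.println(peakGBps(800, 64) + " GB/s");
    }
}
```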

Next up, I've got more of my GPU architecture planned out, and I have come up with (in my opinion) a very efficient design. Instead of having all 8 cores running on the rising edge of the clock, half of them will operate on the rising edge, and the other half on the falling edge. This way the GPU is always doing something. Next, recall that each core has its own code cache. Well, that cache will operate on the opposite edge of the core's clock. So when the core sends a data request to the cache, the cache will process that request while the core settles into the next clock, so the data is ready when that clock cycle begins. The texture cache will do the same thing, but I'll have to find a way to compensate for the fact that half the cores are running out of phase. Also, due to chip limitations, there will be 2MB of texture cache instead of 4.

The GPU will also have two sets of FPUs. One will be a big FPU that processes less common operations. Then each core will have its own smaller personal FPU to handle the more common operations. Both FPUs will be pipelined to allow for high frequencies, and the FPU architecture will allow an operation to run multiple times at once. So you could spam the add instruction, and the data will come out in the order you put the arguments in (as long as you don't overflow the pipeline). The pipeline registers will also load data on the opposite edge of the clock from the core. So when the core clock goes high, the register feeds data to the next stage. When the clock goes low, the register stores data from the previous stage. I will also try to give each core branch prediction and out-of-order execution, making them superscalar.
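The "spam the add instruction" behavior is basically a fixed-depth FIFO: one operation enters per cycle, and its result pops out a fixed number of cycles later, always in issue order. Here's a quick behavioral sketch in Java (a timing model only, not real hardware, and the class name is made up):

```java
import java.util.ArrayDeque;

public class PipelinedAdder {
    private final int depth;
    private final ArrayDeque<Long> stages = new ArrayDeque<>();

    public PipelinedAdder(int depth) { this.depth = depth; }

    /**
     * Issues one add per cycle. The new operation enters the pipeline
     * immediately; its result emerges 'depth' cycles later, and results
     * always come out in the order the operands went in. Returns the
     * result completing this cycle, or null while the pipeline fills.
     */
    public Long cycle(long a, long b) {
        stages.addLast(a + b);
        return stages.size() > depth ? stages.removeFirst() : null;
    }
}
```

With a depth of 3, the first three issues return nothing while the pipeline fills; from the fourth cycle on, one result drains per cycle, oldest first.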

I have also decided to not lock the GPU speed to 400 MHz, but instead experiment to see how hard I can push the Spartan 6 FPGA before I get graphical glitches.

As for the CPU, I have done some more reading of the datasheet. The pinout is very similar to what I'm used to working with. It has a wait pin, interrupt pins, bus arbitration pins, and all that. The only pin it doesn't have is a read/write pin. Instead it has 4 request pins, which the datasheet describes as pins that indicate the type of data transfer being performed. Well gee, thanks, datasheet. I totally know how to use them now. If I can find an old PIII motherboard and get my hands on a decent logic analyser, I could reverse engineer the pins.

Well that’s my update. But how about I point you to a different project I’m working one. One much much MUCH easier than this:
