The screen and the CPU aren't connected yet; I'm going to make a sort of GPU thing to connect the two together. It's currently 32x24 pixels and can display 32 characters.
It used to be 64x30, but the save got corrupted and it was lagging heaps, so I copied everything over to a new world and started off by making it smaller. I'll see whether I make it any bigger.
The screen is controlled by an 8-bit address line, an 8-bit set line and an 8-bit reset line. The screen is made up of a whole heap of groups of 8 pixels, like this:
[] [] [] []
[] [] [] []
The groups are all stuck together to make the screen. To write to the screen, a group of 8 pixels is chosen using the address lines, and pixels can either be turned on or off using the set and reset lines.
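For anyone who wants to play with the write scheme outside the game, it can be modelled in a few lines of Python. The names and the 96-group count below are my own assumptions based on the 32x24 size, not details of the actual redstone build:

```python
# Rough software model of the screen's write scheme (assumed names; the real
# thing is redstone). Each address picks one group of 8 pixels; 1-bits in the
# set byte turn pixels on, 1-bits in the reset byte turn pixels off, and
# everything else keeps its previous state.

class PixelScreen:
    def __init__(self, groups=96):             # 32x24 pixels / 8 pixels per group
        self.group_state = [0] * groups        # one byte of latched pixels per group

    def write(self, address, set_byte=0, reset_byte=0):
        state = self.group_state[address]
        state |= set_byte & 0xFF               # set a pixel wherever the set bit is 1
        state &= ~(reset_byte & 0xFF) & 0xFF   # clear a pixel wherever the reset bit is 1
        self.group_state[address] = state
        return state
```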
Watch this space for updates. More pics at http://lazcraft.tumblr.com/tagged/Screen
EDIT: I put a save up containing the screen here: http://site.lazcraft.info/downloads.php#screen
It isn't connected to the GPU or CPU, so you can manually set and reset pixels, but that's about it for now. I'm just putting it up for people to check out if they're interested, since I'm taking so long to hook it up to the CPU. There's a quick tour of it here: http://lazcraft.info/post/3400838007
Should be able to get the save uploaded soon, I just want to make sure it's all working nicely, put some signs in and stuff like that. Then I'll start working on building the GPU, which'll do all the hard work. :smile.gif:
The screen is controlled by an 8-bit address line, an 8-bit set line and an 8-bit reset line. The screen is made up of a whole heap of groups of 8 pixels, ...
So is there one control line for each position on the screen, i.e. there isn't a multiplexer to decode the address yet? And how many preset characters are built into the screen's ROM? Every letter and number? What about the decoder? Something like an ASCII code? Or can each pixel be set manually, rather than a group of pixels? I think the GPU needs to handle at least multiplexing the address lines and a memory buffer.
P.S. I've been testing a small-scale keyboard prototype using pressure plates, and it clearly needs an encoder that uses the same coding scheme as the screen decoder, otherwise they won't be compatible. We need at least a "character" to "hex" table first. (Unicode is complicated; how about just part of the ANSII code? "H" is 0x48, 8 bits per character.)
No, the control lines are shared. Each group of 8 pixels corresponds to a single address from the address byte. If, for example, you wanted to turn on all the pixels in the group at address 00000001, you'd set the address to 00000001, then send 11111111 through the set line. If you wanted to turn off half of those pixels, you'd send 11110000 through the reset line. It means that each individual pixel can be turned on or off, but you can also write to multiple pixels at once. I'll put instructions up here when I upload the save. :smile.gif:
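Replaying that exact example with the little Python model sketched in the first post (again, just an illustration, not the real circuit):

```python
screen = PixelScreen()
screen.write(0b00000001, set_byte=0b11111111)    # all 8 pixels in group 1 turned on
screen.write(0b00000001, reset_byte=0b11110000)  # then half of them turned off again
print(format(screen.group_state[1], "08b"))      # -> 00001111
```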
The plan is to have the "GPU" decode characters and write them to the screen by setting or resetting the appropriate pixels. When you say ANSII do you mean this: http://en.wikipedia.org/wiki/Windows-1252 ? I hadn't chosen a character encoding yet, but this looks good. :smile.gif:
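Whichever table gets picked, decoding is really just a lookup from a code byte to a character (and eventually to a glyph). A toy Python version with a handful of ASCII/Windows-1252 codes, purely for illustration:

```python
# Tiny illustrative decode table: code byte -> character. 0x48 is "H" in both
# ASCII and Windows-1252; the GPU would map each code to a glyph bitmap.
CODE_TO_CHAR = {0x20: " ", 0x41: "A", 0x42: "B", 0x48: "H", 0x49: "I"}
CHAR_TO_CODE = {c: code for code, c in CODE_TO_CHAR.items()}

print(hex(CHAR_TO_CODE["H"]))   # 0x48, matching the "H is 0x48" example above
print(CODE_TO_CHAR[0x48])       # "H"
```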
OK, that means each "address + set/reset" group of control signals can be seen as a longer 16-bit address line? So it is possible to turn each pixel on and off individually? Since each character (a 6x4 pixel group, 24 bits) is wider than 8 bits, the position of each pixel is effectively already encoded as an address?
Quote from the1laz »
The plan is to have the "GPU" decode characters and write them to the screen by setting or resetting the appropriate pixels. When you say ANSII do you mean this: http://en.wikipedia.org/wiki/Windows-1252 ? I hadn't chosen a character encoding yet, but this looks good. :smile.gif:
The development of character sets has been chaotic; they've been modified over and over since the first displays appeared in the 70s. You can get the full picture from the Wikipedia page on character encoding.
But simply put, from the very early ASCII encoding, which technically needs only 7 bits to encode 128 characters, through the later IBM code pages like CP437 and Windows CP1252 (8-bit or 16-bit encodings), to modern Unicode (variable width), each is just a different way of translating a binary signal into a finite set of characters.
The earliest ASCII can be traced back to its telegraph-code predecessors, which translated clicks into characters and weren't bound to powers of two. When it was adapted to binary encoding, the leading bit of the byte came to be used as a flag for character sets that extend beyond the basic one. But the core remains: the first 128 characters of (almost?) every encoding are mostly the same as ASCII.
Not all of the 128 slots in 7-bit ASCII are used for printable characters; 33 of them are control codes, like delete the previous character, end of line, or even ring a bell (no kidding, you can still make a computer beep with it).
And when the first "windows" (really just text boxes) appeared on early text-based displays, several characters in the set were used as edges and background blocks. That's how we got ANSI art, which uses those special characters to create picture-like text, and which turned up in some early computer games.
-------------------------- Warning! The following is for educational purposes only --------------------------
But there are some obvious problems: not every country uses the same alphabet (some don't use an alphabet at all), so 128 or 256 slots are definitely not enough. For foreign characters you combine a high byte and a low byte to form a larger 16-bit character set, which can hold 65536 different characters (DBCS).
Sixty-five thousand sounds large enough? No, not even close. There are more than 30,000 common characters in Chinese alone, and probably 500,000 characters in total, and that's just one language (although an unusual example). So the idea of a code book, like a dictionary for each language, was developed. Each code page, like CP1252 for Western alphabets, is predefined on different computer systems. When you need a different language, you simply switch to a different code page and use those characters.
Looks like we solved the problem, right? Well, not exactly. If we only ever communicate in one language with one code page at a time, it works fine. But what happens when more than one language is used at the same time? Even a single foreign character forces you to swap dictionaries back and forth. To make things worse, not only do you need a whole library of dictionaries, you're also wasting space: with a 16-bit encoding, a language like English with only 26 letters leaves most bits at zero, yet to decode mixed Chinese-English text you have no choice but to spend at least 16 bits on every character.
The solution is Unicode. With variable-width encoding the character size isn't fixed to one length, and there's even room for characters that don't exist yet. Basically it unifies the different languages and assigns each of them its own plane, so different languages can coexist at the same time without needing a pile of dictionaries.
-------------------------- Warning! The above is for educational purposes only --------------------------
There's no general solution for character encoding; it just depends on how you use it, and most of the problems come from cross-language display. So if you stick to a basic SBCS (single-byte character set) of 256 characters, including blocks, edges, and letters, I think that will be plenty for a text-based output display.
The only reason to use an existing encoding is so we can adapt easily from what already exists. But you can map characters to whatever binary codes you want; you don't even need 8 bits, since a 5-bit or 6-bit encoding is enough for simple text output. And since simplicity is essential in a Minecraft megabuild, the fewer bits the better and quicker.
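As a concrete (and totally made-up) example of how small that can be, here's a throwaway 6-bit scheme in Python: 64 codes covering upper-case letters, digits, space and a little punctuation, which would already be plenty for a screen this size.

```python
# A throwaway 6-bit character set: 64 codes, upper-case only. The assignment
# below is arbitrary; the point is just that 6 bits per character is enough
# for a simple text display.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,:;!?'\"()[]<>+-*/=%&#@_^~|"
assert len(ALPHABET) == 64

def encode(text):
    return [ALPHABET.index(ch) for ch in text.upper()]

def decode(codes):
    return "".join(ALPHABET[c] for c in codes)

codes = encode("Hello world")
print(codes)           # [8, 5, 12, 12, 15, 0, 23, 15, 18, 12, 4]
print(decode(codes))   # "HELLO WORLD"
```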
Regarding the keyboard design, I have a related question: what kind of memory cell are you using to store the pixels on the screen? An RS NOR latch? It's the simplest and most compact one I can think of (3x3xn).
And since we can't rely on the CPU and use part of its memory as an I/O buffer, I think the keyboard itself needs a built-in buffer as well. We certainly don't want every keystroke transferred straight to the CPU, and then a 20+ second wait to fix a typo. A more complex keyboard-screen (I/O) design is required.
For example, the keyboard and screen could share an intermediate buffer sitting between the I/O and the CPU. Each cycle the CPU either checks the buffer for keyboard input or writes output codes to it. The screen receives signals from the I/O buffer, and the keyboard only sends signals to it. The keyboard would be more like a typewriter with positional keys.
The advantage is that you can instantly see what you type and erase mistakes before anything is sent to the CPU, which matters with such a slow CPU cycle. The problems will be buffer size and synchronization (I/O interrupts). I think this is worth considering while building the screen: the encoding will affect the buffer size, and extra control signals are needed to set the timing of the screen refresh, whether it refreshes constantly or is synchronized.
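A really stripped-down version of that buffer idea, in Python, just to pin down the behaviour being described (all names invented; nothing here maps to actual redstone): the keyboard pushes codes in and echoes them to the screen immediately, backspace edits the buffer locally, and the slow CPU only drains it once per cycle.

```python
from collections import deque

# Sketch of the shared keyboard -> screen -> CPU buffer idea (invented names).
# Keys are echoed to the screen at once and can be erased locally; the CPU
# only sees the buffer contents when it polls, once per (slow) cycle.
class TypeaheadBuffer:
    def __init__(self):
        self.pending = deque()

    def key_pressed(self, code):
        self.pending.append(code)
        print("echo to screen:", code)       # instant local feedback

    def backspace(self):
        if self.pending:
            erased = self.pending.pop()
            print("erase from screen:", erased)

    def cpu_poll(self):
        drained = list(self.pending)          # what the CPU sees this cycle
        self.pending.clear()
        return drained

buf = TypeaheadBuffer()
buf.key_pressed(0x48)   # 'H'
buf.key_pressed(0x42)   # typo
buf.backspace()         # fixed before the CPU ever sees it
buf.key_pressed(0x49)   # 'I'
print(buf.cpu_poll())   # -> [72, 73]
```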
I'm a little confused. What are those things in the 8x column for?
You are probably going to stick to just one case, right? (upper or lower)
Yeah, probably just upper. I'd think about adding lower if I had the time, but I don't. :S
Is the GPU going to have state (like a cursor) or are individual characters going to be painted to screen-character addresses directly?
Should have both. I'm hoping to have plain print functions that just print to the cursor position and move the cursor to the next position, as well as printing to specific locations on the screen.
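In software terms the two kinds of printing could look something like the sketch below. `put_char`, the cursor variable and the 8x4 character grid (32x24 pixels at 4x6 per character) are all just assumptions for illustration, not how the GPU is actually built.

```python
# Sketch of cursor-based vs positional printing (invented names).
COLS, ROWS = 8, 4          # 32x24 pixels at 4x6 pixels per character

cursor = [0, 0]            # column, row

def put_char(col, row, ch):
    # stand-in for the GPU drawing a character cell at (col, row)
    print(f"draw {ch!r} at column {col}, row {row}")

def print_at(col, row, ch):            # positional print: explicit location
    put_char(col, row, ch)

def print_char(ch):                    # cursor print: draw, then advance
    put_char(cursor[0], cursor[1], ch)
    cursor[0] += 1
    if cursor[0] == COLS:              # wrap to the next line
        cursor[0] = 0
        cursor[1] = (cursor[1] + 1) % ROWS

for ch in "HI":
    print_char(ch)
print_at(3, 2, "X")
```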
Thanks, I'll check them out. Once I've worked out what I want all the characters to look like I'll post them here or on my blog or something for comment. :wink.gif:
OK, that means each "address + set/reset" group of control signals can be seen as a longer 16-bit address line?
More like a 24-bit address line. There are separate set and reset lines. This means I can turn on individual pixels within each 2x4 group without turning off others.
So it is possible to turn each pixel on and off individually? Since each character (a 6x4 pixel group, 24 bits) is wider than 8 bits, the position of each pixel is effectively already encoded as an address?
Yep, the way I've implemented the screen allows for individual pixel control without forcing each character to be written pixel by pixel. It takes three writes to print a character, printing to 8 pixels at a time. Each pixel is essentially encoded as an address, since the address lines control which group you want to write to, and the set/reset lines determine which pixels within the group that you want to write to. I'll try and put up a clearer explanation of how it all works in the OP some time soon.
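So, roughly, printing one character in software terms: take a 4x6 glyph, slice it into three 4x2 groups of 8 pixels, and do one addressed write per group. The glyph bitmap, the packing order, and the idea that a character's three groups sit at consecutive addresses are all made up for illustration; only the "three writes of 8 pixels each" part comes from the description above.

```python
# Illustration of "three writes per character": a 4x6 glyph split into three
# 4x2 groups of 8 pixels. The glyph and the group addressing are made up.
GLYPH_H = [
    0b1001,   # 4 pixels per row, 6 rows
    0b1001,
    0b1111,
    0b1001,
    0b1001,
    0b1001,
]

def character_writes(base_address, glyph):
    """Yield (address, set_byte, reset_byte) triples, one per 4x2 group."""
    for group in range(3):
        top, bottom = glyph[2 * group], glyph[2 * group + 1]
        set_byte = (top << 4) | bottom          # pack 8 pixels into one byte
        reset_byte = ~set_byte & 0xFF           # clear whatever was there before
        yield base_address + group, set_byte, reset_byte

for address, set_byte, reset_byte in character_writes(0b00010000, GLYPH_H):
    print(f"addr={address:08b} set={set_byte:08b} reset={reset_byte:08b}")
```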
I'm hoping to implement some control signals in the "GPU", I'm sure they'd be helpful.
Yep, there's an RS NOR latch behind each pixel.
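The behaviour of that latch is easy to model if anyone wants to experiment outside the game: a pulse on S latches the pixel on, a pulse on R latches it off, and with both low it just remembers. (This is only the logic, not the 3x3xn redstone layout.)

```python
# Behavioural model of an RS NOR latch: pulse set -> output sticks at 1,
# pulse reset -> output sticks at 0, both low -> it remembers.
class RSNorLatch:
    def __init__(self):
        self.q = 0

    def update(self, s, r):
        if s and not r:
            self.q = 1
        elif r and not s:
            self.q = 0
        # s == r == 0: hold the previous value (s == r == 1 is the forbidden state)
        return self.q

pixel = RSNorLatch()
print(pixel.update(s=1, r=0))  # 1: pixel latched on
print(pixel.update(s=0, r=0))  # 1: stays on with no inputs
print(pixel.update(s=0, r=1))  # 0: pixel latched off
```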
Not sure about all the keyboard and buffer stuff at the moment, I just want to get the screen all working before I do anything with a keyboard. It shouldn't be too hard to connect the keyboard directly to the screen though, allowing both the CPU and the keyboard to write to it. I'll probably use 7-bit encoding and figure out what to do with the spare bit later.
If the GPU has geometry fill instructions they wouldn't be that useful, but if not they provide a way to draw on the screen 3-bytes (one character) at a time without having to set the bytes individually.
Oh okay. That looks really useful. I suppose that before I settle with any character encoding I should work out exactly what I'll be needing from it.
For drawing characters, the ones in the 8x column here are nice.
http://en.wikipedia.org/wiki/ZX_Spectru ... age_layout
You are probably going to stick to just one case, right? (upper or lower)
Is the GPU going to have state (like a cursor) or are individual characters going to be painted to screen-character addresses directly?
BTW, if you are looking for some tiny bitmap fonts, have a look here.
http://news.ycombinator.com/item?id=1915513
There is a really nice 3x5 pixel font there. ;-)
Have you ever seen a DOS ASCII-mode Pong game?
http://en.wikipedia.org/wiki/Box_drawing_characters
If the GPU has geometry fill instructions they wouldn't be that useful, but if not they provide a way to draw on the screen 3-bytes (one character) at a time without having to set the bytes individually.
Kudos - you're doing some amazing stuff. :smile.gif: