Wednesday, November 24, 2010

Small Pixeldust update + Gameplay ideas?

Built a basic particle engine. So far, so good.



Instead of creating each particle individually by reading from a save file, the particle engine has built-in methods for creating particle effects. All you need to do is fill in parameters for, say, texture, color, gravity, position, rate, etc. I'll still be creating files to store the parameters, but this way I'll be able to create a simple particle animation easily!

Another thing: even though this appears on a 3D background, the particles do no depth reading and are drawn on a 2D plane. That's just what I want, because otherwise you wouldn't see particles below a target's feet. I had planned to create battle sequences "Paper Mario" style, where everyone is lined up on a single plane (although even Paper Mario had its fighters spaced out a little). So the way the particles are drawn is no problem anyhow.



I also started thinking about how the gameplay for these battle sequences should work. I'm thinking "away with all these fancy button presses" and a focus on strategy instead. But how will the player be able to strategize?

When I think about strategizing, I'm reminded of the game "Persona 3". Once, I got into a battle, and by the end of the first round all of my party members were dead except for me, and I had only one HP left. I decided not to run away, since that has a chance of failing. Persona 3 had a mechanic where if you hit an enemy with an element it is weak against, it is stunned and you get an extra turn. Let's call this the "once more" mechanic. Now, all of the enemies were weak against lightning, but I only had an attack that hits one enemy with lightning. Luckily, the "once more" mechanic can be repeated. So my battle went:

Initiate
Everyone dies/ I have 1 HP
Hit enemy 1 with thunder
ONCE MORE
Hit enemy 2 with thunder
ONCE MORE
Hit enemy 3 with thunder
ONCE MORE
My enemies aren't dead yet. Finish with a weaker non-elemental attack that hits all
Battle end.
POINTS!!!

Now, I don't want to make battle sequences exactly like this for my game. But what I liked about Persona 3's battles was that I could get myself out of a sticky situation fast with whatever I had. Another thing I liked was that status effects and buffs (poison, attack up, etc.) actually had a real impact on battle compared to other games. Did Persona 3 have fancy attack sequences? No. Battles were simple and to the point. I think the fanciest attacks got in that game were when all enemies were stunned and your entire party rushed them in an "all-out attack".


So... strategy. How will players win the battles? In how many ways can the player win the battle? If Plan A doesn't work, how easy is it to fall to plan B? Just how useful are the spells? Or the system altogether? Can the player take advantage of certain situations that would normally play against their favor, or turn the tables on the system? Can the system be abused in the player's favor? Can the player be... "creative"... use their imagination and understanding of the system... in how they will achieve victory?

I'd rather focus on creating a system that the player can work and strategize with, instead of one that tests the player's reflexes and just looks pretty. This decision renders having to make a "Super cinematic attack sequence alpha" editor program pointless, in my opinion. The fanciest my attacks should get is probably when the player "summons" a being to attack. (I'm thinking every player character will have their own unique 'summon', which levels up alongside them and gains more attacks as the player progresses. How useful a summon can be at any given time depends on player participation in battle [giving/receiving hits, healing and defending allies, etc.])

So this PixelDust program will be THE special effects program for all attacks. After this, I could go straight to making the player/enemy editing programs, and then the actual battle engine much faster.

Monday, November 22, 2010

PixelDust

Started working on PixelDust, particle animation program:


I'm thinking of what kinds of rules I can set for myself for making particles:

  • There will be "x" amount of particles at any given time. X is a static value representing how many particles can be shown in a single frame. I don't plan to use very many particles, so I can keep this at about 100-200. The reason I want to keep this static is garbage collection: I'm afraid allocating too many new objects per frame will kill my game, so I think it'd be better to declare everything beforehand. As for preventing "unfilled" particles from being drawn? I'll probably just turn those particles around so their vertex winding order is clockwise, cull those, and only draw the quads wound counter-clockwise.
  • All particle animations will run off the same texture file. That's because I'd like to batch all of the particles into a single draw call. I'm told Xbox360 doesn't really like so many draw calls (my scenery is drawing a lot already anyways), so I'll keep this simple. It's no big deal for me anyways, since I'd prefer for my game to have low-res point-sampled graphics.
  • No fancy shader programs. All particles should be rendered with XNA's built-in effects. Anything a Windows Phone can do, an XBox360 and a PC *should* be able to do. So I'll be setting the texture coordinates, positions, etc. of each particle vertex on the CPU.
  • Each particle animation will have its own sound effects and target effects. If the animation is directed towards a specific target (as in the preview screen), the target effects will be played.
  • Although they are previewed in 3D space, particles will really only move along the XY plane. They also won't do any depth reading, so any particles below the target that sink into the ground will still be visible. Which brings up the question of particle layering: particles will either be sorted before drawing or drawn additively (which is how most particles are drawn anyhow).
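The fixed-pool idea above can be sketched like this (a quick Python sketch of the concept only, not my actual engine code; names like MAX_PARTICLES are made up):

```python
# Fixed-size particle pool: everything is allocated up front, so no
# per-frame allocations ever trigger the garbage collector.
MAX_PARTICLES = 200

class Particle:
    __slots__ = ("x", "y", "life")
    def __init__(self):
        self.x = self.y = 0.0
        self.life = 0.0  # <= 0 means "unused"

pool = [Particle() for _ in range(MAX_PARTICLES)]

def spawn(x, y, life):
    # Reuse the first dead particle instead of allocating a new one.
    for p in pool:
        if p.life <= 0.0:
            p.x, p.y, p.life = x, y, life
            return p
    return None  # pool exhausted: just drop the effect

def update(dt):
    for p in pool:
        if p.life > 0.0:
            p.life -= dt
            # When a particle dies, this is where its quad's winding
            # would be flipped to clockwise so the cull mode skips it.
```

The "cull the clockwise quads" trick means dead particles still live in the vertex buffer; they just never reach the screen.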

Wednesday, November 10, 2010

Voxelcraft is now complete

I've set up the control system and the save/loading, so Voxelcraft is good to go as of today :)






It's about time, too.

I came up with an idea for a video game I wanted to make a long while ago, but I was still trying to figure out the big things like: how am I going to make the worlds? Collision? How will it be played? I had put together a scene that would depict what the game would look like:
I wanted to make an RPG in the style of an 8-bit era game, sort of like Half Minute Hero and the Bit.Trip series. It's a bit of a step up from the last game I made, a mindless shoot-em-up where you shot down tons of Michael Jacksons using your Rubber Ducks. For that game, I developed a basic shooter engine and built levels as I wanted, but now I'm going to have to plan. RPGs focus on chains of events, where doing one thing in one place lets you do another thing in another place. Not completely sure how I'll do this, but I'll figure it out along the way.


I don't remember when I decided to use "3D" for my game. It felt more natural for an 8-bit game to be in 2D. Maybe I thought it'd be a nice experiment. I tried figuring out what type of 3D would be best for my game, and how complex it would be. I didn't feel like using a 3D modeling system like Blender or 3Ds Max. I wanted all geometry to be calculated in-game during loads. In the end, I decided to use cubes and only cubes for the worlds I make.

I think it was about six months ago that I started working on Voxelcraft. Once I was able to place the first cubes, it looked something like this:


You might see a problem in this picture: there are these lines across the corners. I whined so much about this, and I tried to find out how to fix it. Eventually I found out what my problem was!

I'm stupid.

I can't have a cube with 8 vertices that share normals for lighting values. If I want each face to be lit correctly, the faces need their own vertices. So now cubes have 24 vertices.
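To illustrate the idea (a Python sketch of the geometry, not the actual XNA vertex code): 6 faces times 4 vertices gives 24, and each corner position gets duplicated three times, once per face that touches it, so each copy can carry its face's normal.

```python
# Build cube vertices with one normal per face. Sharing the 8 corner
# vertices would average the normals and break flat-face lighting.
FACE_NORMALS = [
    ( 0,  0,  1), ( 0,  0, -1),  # front, back
    (-1,  0,  0), ( 1,  0,  0),  # left, right
    ( 0,  1,  0), ( 0, -1,  0),  # top, bottom
]

def cube_vertices():
    verts = []
    for n in FACE_NORMALS:
        # The axis the normal points along, and the two axes across the face.
        axis = n.index(next(c for c in n if c != 0))
        u, v = [i for i in range(3) if i != axis]
        for du, dv in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
            pos = [0, 0, 0]
            pos[axis] = n[axis]
            pos[u], pos[v] = du, dv
            verts.append((tuple(pos), n))  # (position, per-face normal)
    return verts
```

Only 8 unique positions, but 24 (position, normal) pairs, which is exactly the duplication the fix requires.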

The rest of the program's development went slowly. I was learning how XNA works, and was doing other things at the time (such as schooling. Damn you, school).

But I'm glad I finally have this program done. This means I can finally move on to other programs! I have so many more I have to make, but none of them will be as hard to make as Voxelcraft was. Voxelcraft is about as far as graphics for my game goes, except for particles and sprites.

Which is the next program I'll be developing: Dust. It will make all the particle animations for battles. It'll be very similar to the particle program I previously built in Visual Basic, Pixelcraft...


I want my game to not only run on PC and XBox, but also on the new Windows Phone, with shadows and some other effects turned off (XNA on Windows Phone can't have custom shaders). Particles usually have custom shaders, but they can be drawn with XNA's built-in BasicEffect, so particles will be simple.

Another program I'm looking forward to making is one that will create "cinematic" attacks. In many games, when your character has some special attack, the game stops playing like a game and looks like an interactive movie for a minute. Squaresoft is rather fond of this, but don't worry, I won't make attack sequences that drag out over a minute :-P I'm thinking that instead of having the player watch some movie, I'll have them interact with the attacks to change the outcome (this way, I can have bosses with attacks that kill everyone in one hit, but can be avoided and possibly countered). I'm still thinking about how the player will interact. Will it be through button presses?... as with EVERY game!?

It'll be figured out soon.

Saturday, November 6, 2010

I knew I was forgetting some kind of light!

I model Voxelcraft off of all the old-school 8-bit games. 8-bit games come from the 80's, and when I think of the 80's I think of neon bowling for some reason. How could I resist a glow effect?


To produce the glow effect, I re-render the entire scene into its own buffer, except instead of using the normal texture atlas I use a texture file containing glow information. It's a very simple render; I don't apply any sort of lighting. Then I apply a Gaussian blur to the buffer and blend it additively into the final image. Easy as one, two, three!
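The blur-then-add pipeline looks roughly like this (a toy Python sketch on a grayscale list-of-lists "image", with a made-up 5-tap kernel; the real thing happens on the GPU):

```python
KERNEL = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]  # 5-tap Gaussian approximation

def blur(img):
    # Separable Gaussian blur: horizontal pass, then vertical pass,
    # clamping samples at the image edges.
    h, w = len(img), len(img[0])
    r = len(KERNEL) // 2
    tmp = [[sum(KERNEL[k + r] * row[min(max(x + k, 0), w - 1)]
                for k in range(-r, r + 1)) for x in range(w)] for row in img]
    return [[sum(KERNEL[k + r] * tmp[min(max(y + k, 0), h - 1)][x]
                 for k in range(-r, r + 1)) for x in range(w)] for y in range(h)]

def add_glow(scene, glow):
    # Blur the glow buffer, then blend it additively into the scene,
    # clamped to the displayable range.
    b = blur(glow)
    return [[min(1.0, scene[y][x] + b[y][x]) for x in range(len(scene[0]))]
            for y in range(len(scene))]
```

Because the blend is additive, glowing pixels only ever brighten the final image; dark areas of the glow buffer leave the scene untouched.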

Another effect I have is fog. For most of the scene, I take every vertex and give it a "fog density" value based on the fog data and how close the camera is to the vertex. I couldn't do this with the water, though, since all the water vertices are far away and would all get the same tint, so I'm doing fog in the pixel shader of my water effect instead.
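The per-vertex fog math is just a clamped linear ramp on camera distance (a Python sketch of the idea; fog_start/fog_end are illustrative parameter names, not anything from my code):

```python
def fog_density(vertex, camera, fog_start, fog_end):
    # Linear fog: 0 before fog_start, 1 past fog_end, ramping in between.
    dx, dy, dz = (v - c for v, c in zip(vertex, camera))
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    t = (dist - fog_start) / (fog_end - fog_start)
    return max(0.0, min(1.0, t))

def apply_fog(color, fog_color, density):
    # Blend the vertex color toward the fog color by the density.
    return tuple(c + (f - c) * density for c, f in zip(color, fog_color))
```

This is also why it fails for the water plane: its few vertices are all at similar distances, so every water pixel would get nearly the same density, hence moving the calculation into the water's pixel shader.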

Yeah, I'm such a liar for saying I only had channels and saving to do. But I did do the channels bit of Voxelcraft!


I did have a bit of trouble with rotation, though. Say, for example, I wanted to make a windmill at a 45-degree angle. I would make a separate channel for the fan of the windmill, place it on the windmill, and spin it along its z-axis. It wouldn't work, because it wasn't spinning on its own z-axis, it was spinning on the world's z-axis! But solving this problem was easy. I was using the Matrix.CreateRotation methods for the rotation; I just replaced those with Matrix.CreateFromYawPitchRoll and all is well.
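The underlying gotcha is the order the transforms compose in. Here's a toy Python illustration (not the XNA code, just the matrix math): rotating after translating spins the fan around the world origin, while rotating before translating spins it in place.

```python
import math

def rot_z(angle):
    # 3x3 rotation about the z-axis.
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, p):
    return tuple(sum(m[i][j] * p[j] for j in range(3)) for i in range(3))

def translate(p, t):
    return tuple(a + b for a, b in zip(p, t))

fan_vertex = (1.0, 0.0, 0.0)   # a vertex in the fan's local space
fan_pos = (10.0, 0.0, 0.0)     # where the fan sits on the windmill
spin = rot_z(math.pi / 2)

# Wrong: translate first, then rotate. The fan orbits the world's z-axis.
world_axis = apply(spin, translate(fan_vertex, fan_pos))
# Right: rotate in local space first, then translate. The fan spins in place.
local_axis = translate(apply(spin, fan_vertex), fan_pos)
```

In the wrong version the vertex ends up way over at roughly (0, 11, 0); in the right one it stays on the windmill at roughly (10, 1, 0).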


Also, some other pictures of Voxelcraft, since I haven't updated on that much lately... these pictures show off more of the mini-cube, which takes a pixel from the texture atlas and maps it onto a miniature "pixel" cube.





Also, a new addition: spotlights.

Monday, November 1, 2010

Voxelcraft almost finished

I am almost finished with Voxelcraft. I have most of the basic functions finished, such as texturing, mini-cubes, and lighting, and now all that's left to do is channels and saving, as well as a few adjustments to make the program more user-friendly.
Channels will be easy. I will separate each channel into a different vertex buffer and draw those separately. Each channel also contains data for rotation, translation, and scale, and cubes can be placed into those channels. This gives effects such as objects floating around and gears turning; basically, it gives cubes movement. So if I wanted to make a scene where you are on top of a moving train, I'd put the train cubes and the scenery cubes on different channels. This could also let me easily adjust parts of the map inside game code (like if a player goes up to a door and opens it). Also, for scrolling, I'll have to copy the channel in the other direction to complete the scrolling effect.
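The per-channel transform boils down to one world matrix per vertex buffer (a Python sketch of the concept; the function names are invented, and I've limited rotation to the z-axis to keep it short):

```python
import math

def channel_matrix(scale, angle_z, translation):
    # Per-channel world transform: scale, then rotate about z, then translate,
    # packed into one 4x4 matrix.
    c, s = math.cos(angle_z), math.sin(angle_z)
    tx, ty, tz = translation
    return [
        [scale * c, -scale * s, 0,     tx],
        [scale * s,  scale * c, 0,     ty],
        [0,          0,         scale, tz],
        [0,          0,         0,     1],
    ]

def transform(m, p):
    # Apply the 4x4 matrix to a point (homogeneous w = 1).
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

# A "train" channel slides all of its cubes together each frame by
# updating only its translation, never touching the vertex data.
train = channel_matrix(1.0, 0.0, (5.0, 0.0, 0.0))
```

That's the whole point of channels: moving the train is one matrix update, not a rebuild of thousands of cube vertices.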
Saving is pretty self-explanatory. It will make the Voxelcraft files to be used with other programs and ultimately the game. It will save vertex data and world matrices for each channel, as well as light data.
I'm glad I got this far in development, but I am a bit disappointed to end it here. It feels like there should be more things to do, like depth blur and HDR. But that might make the project too complex (I want to keep everything simple). Voxelcraft was meant to create simple cube worlds, not very realistic models! I might still put such effects in later, when I work on a later development tool.
I've been working on it for so long, too. I just want to end it so I can move on. I started it maybe six months ago (?), and stopped after I got basic colored cubes down because I couldn't figure out normals (I had no 3D experience before this). After working on another program (a 2D skeletal animation program), I decided I wanted to go back to Voxelcraft. Now I'm finally finished, and I'm quite content with it for the most part.

But I need to be working on other areas, such as gameplay. Before I work on gameplay, though, there's another graphics program I need to make. It will be a 2D particle system called Dust. It will make all the attack animations and special effects used in battle for the game. I say 2D because the camera in battle will probably not be as free; it will move around but not rotate (creating a 2.5D effect). Since I'm doing this, I can get away with 2D effects. Don't worry, though, the camera can still rotate when it needs to, and there will be 3D particles; they just probably won't be used in battle.

Saturday, October 23, 2010

Texture select added

I can't believe it took me this long to finally decide to implement the more important parts of Voxelcraft. But now I can use multiple textures with Voxelcraft.


The pictures you see on the bottom left show which textures are being used for each face of the current cube. All of the textures are packed into a single file called the "texture atlas". The user can use the QWERTY keys to change the texture for each face. When one of those keys is pressed, a picture of the entire texture atlas appears, and the user presses the arrow keys to choose the texture.

  • Q - Front
  • W - Back
  • E - Left
  • R - Right
  • T - Up
  • Y - Down
Also, the user can press the U key to assign one texture to all sides of the cube. Very handy.
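The whole key scheme is just a lookup table plus one "assign everywhere" shortcut (a trivial Python sketch; the function name is made up):

```python
# Key -> cube-face mapping used by the texture picker.
FACE_KEYS = {
    "Q": "front", "W": "back",
    "E": "left",  "R": "right",
    "T": "up",    "Y": "down",
}

def assign_all(face_keys, texture_id):
    # The U key: give every face the same texture in one press.
    return {face: texture_id for face in face_keys.values()}
```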

One of the problems I ran into was that the Wrap address mode for texture sampling didn't work as I thought. I thought I could wrap just a portion of a texture (I'm such a noob at this :-P). So I couldn't wrap tiles anymore. Sucks. I had to think of an alternative, and I've come up with several:

I could probably get away with using multiple vertices to emulate wrapping. But this would be very expensive if I wanted to, say, draw a 100x100 plane. That'd be about 10,000 vertices (20,000 primitives)! That's a lot more compared to the 4 vertices (2 primitives) I used to get before I found out wrapping wouldn't work.

So I thought I could separate the textures into different texture files. And then for each texture, I would draw each cube face that corresponds to that. And I'd switch to the next texture for the next draw call. But this'll amount to too many draw calls. I'd like to keep the number of draw calls low.

Eventually, I settled on what I thought was a really crappy method of wrapping. But it works. Originally, I had a texture atlas sized 64x64, with each texture being 8x8. Since my textures are so simple, I decided I could expand my texture atlas to 512x512 and have each texture that's supposed to wrap repeat itself 8 times both ways! And when I resize my cube, it'll refer to the next 8x8 tile over, emulating a tiling effect.

This is loosely based on my "many vertices" idea, except with far fewer vertices. Because each wrapped texture repeats 8 times, I'll have to make more cubes to extend the wrapping effect. Going back to my 100x100 plane scenario, that'd be about 13 smaller planes each way, and because the smaller planes aren't connected to each other and use their own vertices, that comes to 676 vertices (338 primitives). But that's SO MUCH better than the previous 10,000-vertex solution.
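The arithmetic behind those numbers, for the curious (a Python sketch; "grid" models the fully tessellated plane with shared vertices, "chunked" models one independent quad per 8x8 block of tiles):

```python
import math

TILE_REPEAT = 8  # each wrapping texture repeats itself 8 times in the atlas

def grid_vertex_count(w, h):
    # Fully tessellated plane with shared vertices: a (w+1) x (h+1) grid.
    return (w + 1) * (h + 1)

def chunked_vertex_count(w, h):
    # One standalone quad (4 vertices, 2 triangles) per 8x8 chunk of tiles.
    chunks_w = math.ceil(w / TILE_REPEAT)
    chunks_h = math.ceil(h / TILE_REPEAT)
    return chunks_w * chunks_h * 4
```

For the 100x100 plane: the grid needs 101 * 101 = 10,201 vertices (the "about 10,000" above), while the chunked version needs 13 * 13 quads = 676 vertices.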


I also ran into the problem of flickering lines at the edges of each cube that has another cube adjacent to it. I'm pretty sure even professionals run into this problem a lot, but I have it worse because Voxelcraft is based on cubes. Not even POINT sampling will save me, which is what I was using in the first place.

If I couldn't solve the problem, I decided I'd hide it. I made a 256x256 image of repeating cubes.


Each cube represents a "pixel". The effect I get from applying this to my textures is that each "pixel" in the texture atlas kind of sticks out. It's similar to 3D Dot Game Heroes' style of art. I was trying to avoid this so I could have a cleaner look, but it looks like this is the only way to solve the problem.

I noticed that there were some strange artifacts at certain distances where this texture overlay would form strange shapes of its own (you might be able to see it in the picture up top). "Duh!" I thought. "I haven't made use of mip-mapping!" Honestly, I thought mip-mapping didn't make much of a difference in graphics. But I was SO wrong. Mip-mapping cleans up a lot of aliasing, and it can make scenes render faster, since it samples from a smaller texture within the texture's mip-map "chain".

I should probably use mip-mapping for my actual texture atlas too, but I hear it's impossible to do without merging pixels from neighboring textures. I could probably just add a border around each texture instead.

Monday, October 18, 2010

War of the 300 Errors

Tonight, I dine in hell.

Today, I decided I wanted to get with the times and upgrade to the new XNA 4.0. For these reasons:

  • It's so much easier to use. I hate having to write so much code for the simplest of things.
  • I noticed while I was implementing the moon that the glow was fading to black around the edges, even though it was white when I made it. It's because the alpha wasn't pre-multiplied, apparently (or something to that extent), and XNA 4.0 handles the alpha correctly. I could've written a custom importer to deal with this while using XNA 3.1, but I'd rather not get into very complicated code right now.
  • XNA Creators Club won't accept 3.1 games after a while. They will only accept 4.0.
  • Reach/HiDef profiles. One's used for low-end systems (Reach), and the other for high end systems (HiDef). I can use Reach if I want to develop a game using a "limited API", which will make it easy for me to develop for older, less powerful computers. Or, I can use HiDef for the full API, and make higher quality games for XBox 360 or a powerful PC. I plan on using both, if possible.
  • I can develop for Windows Phone 7. Don't know how that would work out, but it'll be a neat experiment, at the very least.
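On the pre-multiplied alpha point above, the math is worth spelling out (a Python sketch of the general technique, not XNA's content pipeline): each color channel gets multiplied by alpha up front, and the blend then becomes source plus destination times one-minus-alpha, so a white glow with a soft alpha edge stays white instead of fading through gray to black.

```python
def premultiply(r, g, b, a):
    # Bake alpha into the color channels ahead of time.
    return (r * a, g * a, b * a, a)

def blend_premultiplied(src, dst):
    # Standard premultiplied blend: result = src + dst * (1 - src.alpha)
    r, g, b, a = src
    return tuple(s + d * (1 - a) for s, d in zip((r, g, b), dst))
```

A half-transparent white pixel over black comes out mid-gray, which is what the moon's glow edge should look like.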
So I got all the new stuff, converted Voxelcraft to XNA 4.0, and started the program... only to be faced with 300 errors. As it turns out, the XNA crew made a lot of changes to the new framework. Some of these changes are very convenient: I like the new replacements for RenderState, and now I can just set vertices and indices and draw them without having to deal with VertexDeclaration. They also took out a couple of things, though, most of which are no big deal (as far as I'm concerned), and since they changed how so much works in XNA, that's what brought up those 300 errors. Their errors have blocked out the sun...

Then I shall code in the shade. Just me against the Microsoft Empire. THIS.... IS....

...not as bad as I thought. I've already gotten rid of the errors; most were pretty simple. I did end up having to take out normal maps (for cubes, not the ocean), but I never use those normal maps anyway. It wasn't getting the tangent for the vertices for some reason, even though it always worked in XNA 3.1. But I'm glad the new XNA picked up on this error, because if someone had tried to use this program on another computer, it might have crashed.

Most things are working, except for water. They took out ClipPlane, which I needed for reflection/refraction. This should be easy to fix, though: I can probably use the clip() intrinsic in the HLSL code to replace it.

Saturday, October 16, 2010

Ocean

Bit.Trip Runner is probably the happiest game I've ever played. Seriously, what other game have you played where you run around collecting gold while leaving behind freaking rainbows in an atari-like world? Not that it has anything to do with this post, of course.

But implementing oceans into my engine was so much easier than I thought; it took me one day. Probably because I was following Riemer's tutorials, and I suggest you do the same (www.riemers.net). I figured the water he's done was simple enough for me to implement quickly, and just the right type of water for this type of project.

Think of water as a plane. A HUGE plane that stretches over the playing field of the game... with waves and reflections and refractions and stuff. Sounds complicated, but as far as geometry goes, it's just a single plane.

I didn't have any wave maps on hand, so I started with mirroring and basic dirty color. Dirty color is what we'll call the base color of the ocean.


First, if you want mirroring, all you need is a "reflection map". To create the reflection map, you first create the clipping plane. The plane must be a horizontal plane set a little bit below the ocean. There should be a little offset, because once waves are implemented, it may reflect incorrectly otherwise. Next, you render your scene as normal with everything below the clipping plane culled away. But instead of rendering the scene from the view of the camera, you render from below the ocean. To do this, you take the camera's y coordinate and flip it over the ocean surface. For example, if the camera's y coordinate is 4 and the ocean height is 1, the reflected y coordinate is -2.
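That flip is a one-liner, so here it is spelled out (a Python sketch; the function name is made up):

```python
def reflect_camera_y(camera_y, water_height):
    # Mirror the camera over the water plane: y' = 2 * h - y.
    return 2 * water_height - camera_y
```

Plugging in the example above: a camera at y = 4 over water at height 1 reflects to y = 2 * 1 - 4 = -2.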

By the way, if you want to save a couple of milliseconds, you could probably get away with making the reflection map 1/2 or 1/4 the size of the screen, so it would render faster. This will result in a pixelated reflection, but blurring can be applied to the map afterwards.

Then you take the reflection map, and in the ocean shader make it sample from the map using the reflected camera's view matrix as projected onto the screen.

Adding the dirty color is easy. Simply add it into the color you get from the reflection map.

Of course, what's water without waves?


I thought I was finished at this point, since this requires making normal maps, but then I found a Normal Map plug-in for GIMP here (http://registry.gimp.org/node/69). By rendering a tiled turbulence cloud in GIMP and then running it through the normal map plugin, I got this picture:


If you're wondering what a normal map is: a normal map contains data for which way light will reflect. The RGB values of the picture represent the XYZ components of rays going from point (0, 0, 0) to (x, y, z). These components are in the range [-1, 1], but a picture stores values ranging from 0 to 255. So normal maps map 0 to -1, 127 to 0, and 255 to 1. This is why the image is mostly blue: by default a texture should reflect light forward (perpendicular to its orientation), so that value is (0, 0, 1), with the normal map equivalent being (r = 127, g = 127, b = 255).
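The byte-to-direction decoding is a simple remap (a Python sketch of the standard convention):

```python
def decode_normal(r, g, b):
    # Map byte channels [0, 255] back to direction components [-1, 1].
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
```

Note that 127 decodes to roughly -0.004 rather than exactly 0, since 255 is odd; that tiny bias is why some tools use 128 for "flat" instead.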

This normal map won't be used just for lighting, though. It's also used to displace pixels in the reflection map. To do that, you first find the color of the normal map at the specified coordinates. From that color you get the "perturbation", or offset. Multiply it by the height of the wave, and you have the offset into the reflection map. Now, when you find the coordinates of the pixel to sample from the reflection map, just add the perturbation to that coordinate.
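In sketch form (Python for illustration; WAVE_HEIGHT is a made-up tuning constant, and only the x/y channels of the normal are used for the 2D offset):

```python
WAVE_HEIGHT = 0.05  # made-up scale factor; bigger = choppier ripples

def perturb_coords(tex_coord, normal_rgb):
    # Decode the normal map's x/y channels to [-1, 1], scale by the wave
    # height, and nudge the reflection-map lookup by that offset.
    nx, ny = (c / 255.0 * 2.0 - 1.0 for c in normal_rgb[:2])
    u, v = tex_coord
    return (u + nx * WAVE_HEIGHT, v + ny * WAVE_HEIGHT)
```

A flat normal (127, 127, 255) leaves the lookup basically untouched, so still water stays a clean mirror; tilted normals smear the reflection around, which reads as ripples.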

All that's left is refraction.



To do this, I made a refraction map the same way I did the reflection map, only this time I render from the camera's view, not the reflected camera's, and cull away everything above the water instead. I get the color of the refraction the same way I get the color of the reflection, too. But this time, I need to know how much of the reflection color and how much of the refraction color to use. Something called the Fresnel term is used to find this out. To get the Fresnel term, you take the dot product of the normal vector (from the wave normal map) and the camera-to-pixel vector. I get the final color value by "lerping" between the refractive and reflective colors with the Fresnel term.
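The mixing step looks like this (a Python sketch; I'm writing it with the normalized pixel-to-camera vector, since sign conventions for the view vector vary between write-ups):

```python
def fresnel_mix(reflection, refraction, normal, to_camera):
    # dot near 1: looking straight down, water is mostly see-through.
    # dot near 0: grazing angle, water is mostly a mirror.
    dot = max(0.0, min(1.0, sum(n * v for n, v in zip(normal, to_camera))))
    # Lerp from the reflective color toward the refractive color.
    return tuple(rl + (rf - rl) * dot for rl, rf in zip(reflection, refraction))
```

Looking straight down the dot product is 1 and you get pure refraction; at a grazing angle it drops to 0 and the surface becomes pure reflection, which matches how real water behaves.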

Lerping's a weird word. But it's so useful. I also created a texture for the ocean to replace the single dirty color. I sample the texture using the coordinates from the bump map and blend it in as the "dirty color", by lerping between the reflect/refract color and the dirty color with the dirty color's alpha.


And so with that, and the specular highlight of the water, I'm done. I think I'll enjoy the pixelated look to my water texture. I still have the sun to add in, which I'll probably draw into my sky cylinder. Lens flare, maybe?

Thursday, October 7, 2010

Converting from Deferred to Light Pre-Pass, and Sky Cylinder creation

I'm sick. Not only sick of this sore throat I have, but also sick of these light blobs!

But I should probably introduce my program first before I talk about that. I've been developing a program called Voxelcraft for the Starhunter Engine. Voxelcraft creates worlds out of cubes, which is simple enough for what I have in mind. The game I would make with it would be reminiscent of old-school games, but I'll still be using many modern effects such as shadowing, high dynamic range, depth blur, and the like. It's being built with Microsoft XNA.




Previously, I had a deferred renderer working beautifully in my program. Deferred rendering works by separating the colors, normals, depth, and any other information into different render targets. Combined, these make up the G-Buffer (the G is for geometry!). I would use the normal and depth buffers to make a buffer that contains all the lighting information. Then I would lay that on top of the color buffer and call that the final render. There were some disadvantages, notably memory bandwidth, but the main advantage is that it's perfect for many small lights: with forward rendering, your number of draws is (number of objects) * (number of lights), while with deferred rendering, it's (buffer creation + number of lights). I liked it, until the part where I had to add in the sky.
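That draw-count comparison is the whole sales pitch for deferred rendering, so here it is as arithmetic (a Python sketch; a simplified cost model, not a benchmark):

```python
def forward_draws(objects, lights):
    # Classic multi-pass forward lighting: every object is redrawn
    # once per light.
    return objects * lights

def deferred_draws(objects, lights, gbuffer_passes=1):
    # Deferred: draw the geometry once into the G-Buffer, then do one
    # full-screen pass per light, regardless of object count.
    return objects * gbuffer_passes + lights
```

With, say, 50 objects and 20 small lights, forward lighting costs 1,000 draws while deferred costs 70, and the gap only widens as lights are added.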

My game rendered like:

Render G-Buffer-----\
|  Color/Specular   |------------------------\
|  Normal/SpecPower |----->Light buffer-------> Final Scene
|  Depth            |----/
\-------------------/

I would render the sky as a cylinder. Then I would add it into the color buffer of my G-Buffer during initialization, and then render everything else in. Whenever I moved close to any light, a random blob of light would move across the sky! This was coming from the light buffer. I never had this problem before I put in the sky. I think it had to do with how I was calculating the specular for everything. So I messed around in the lighting effect, and got nothing out of it.

There is a way to draw the sky much later by using the stencil buffer from the G-Buffer draw call. But this is very hard to do in XNA, because XNA clears its stencil buffer every time the render target is switched! I hear I can fix this by having the render target "preserve contents", but that doesn't work on XBox 360, apparently. And with the new XNA 4.0, I'm even more stuck, because DepthStencilBuffer has been removed completely and incorporated into RenderTarget2D instead!

So stenciling in the sky is not an option. But I've been looking into a method of lighting called "pre-pass lighting" (http://www.bungie.net/images/Inside/publications/siggraph/Engel/LightPrePass.ppt). It's basically like deferred rendering, but without the color buffer (which has been giving me problems). And since there's no color buffer, the process won't take up as much memory bandwidth on the GPU. I'd still render light into its own buffer, but I would apply it by re-rendering the geometry instead. This way, it knows where it should use the light buffer. This gets rid of my "random light blob" problem, because it only ever happened in the sky, where there's no geometry being rendered.

So now this is my rendering:
----Render sky (without light buffer)-----\
                                          |         
Render-------------\                      |    
|  Normal/Specular |------> Light Buffer  |
|  Depth           |-----/       |        |
\------------------/             |--------/
                                 V
                          Re-render scene
                          geometry w/
                          Light Buffer


Now everything is working, and the random light blob doesn't appear! I looked into the light buffer afterwards and saw that the blob doesn't appear there at all, even though I expected it would. Hmm, it must have been something with the color buffer, because I was using that in the original light buffer creation to refer to the specular. Oh well, either way, the problem's fixed, and I've implemented a lighting procedure that will (hopefully) run faster!




Creating the sky itself was easy. All it is is a cylinder with a texture of 2:1 ratio, plus a bottom and top texture. I read about it at http://blogs.msdn.com/b/shawnhar/archive/2008/11/11/the-sky-s-the-limit.aspx. It works for me, since I'm using sprites for all characters anyway (and it's incredibly awkward to display a sprite from above; it appears flat). I generated the geometry in code using Cos() and Sin() functions instead of loading a cylinder model. I'm pleased with it, but the resolution (512 x 256) is too small in the pic above, so I'll probably increase it to 1024 x 512.
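Generating the cylinder rings with Cos() and Sin() looks roughly like this (a Python sketch of the approach, not my XNA code; note the duplicated seam vertex so the texture's U coordinate can run cleanly from 0 to 1):

```python
import math

def sky_cylinder(segments, radius=1.0, height=1.0):
    # Two rings of vertices (bottom and top); each vertex carries
    # (x, y, z, u, v), with U wrapping once around the cylinder.
    verts = []
    for ring_y in (0.0, height):
        for i in range(segments + 1):  # the +1 duplicates the seam vertex
            angle = 2.0 * math.pi * i / segments
            u = i / segments
            verts.append((radius * math.cos(angle), ring_y,
                          radius * math.sin(angle), u, ring_y / height))
    return verts

ring = sky_cylinder(16)
```

The first and last vertices of each ring share a position but carry U = 0 and U = 1 respectively, which is what lets the 2:1 panorama texture wrap without a visible seam.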

Next, I'll work on sun and moon placement, and water. I want to work on high dynamic range (HDR) at some point, though.