PORTFOLIO C++/DirectX

One-man Engine Project


DirectX

This was an attempt to build a game engine in C++ that uses DirectX as its graphics API and NVIDIA PhysX as its physics engine. Not only were the APIs implemented and experiments done with shaders, post-effects and fluids, but an attempt was also made to write proper class designs, do error handling, apply const correctness and implement some design patterns. Multithreading was tackled to get the level loaded properly and to keep the audio loading (XACT) from interfering with the game. Serialisation was implemented to be able to save/load the current state of the world.

Please feel free to scroll down; I'll explain some of the features that were gradually put into the engine.

C++

The engine was completely written in C++. But to make a properly working engine you should write proper code. Therefore a study was done of proper class design, some dos and don'ts, and design patterns.
Here's a brief summary:

  1. Prefer C++-style casts, avoid C casts (Meyers, Sutter).
    1. C++ has different cast operators for different situations.
    2. static_cast : casts between compatible types.
    3. dynamic_cast : used on pointers/references within an inheritance hierarchy to safely cast between base and derived types.
    4. reinterpret_cast : casts to pretty much anything, like a C-style cast; dangerous.
    5. const_cast : removes constness. Used exceptionally to pass const pointers to old C libraries whose functions don't take const, or to const-incorrect libraries used in your project.
  2. Use const correctness to prevent objects from being mutated.
  3. Use assert macros in Debug builds to detect programming errors (not end-user errors); an assert should only check, never change anything!
  4. Use try-throw-catch and exception classes to handle errors:
    1. When a precondition is violated (invalid function input).
    2. When you fail to achieve a postcondition (saving or loading fails because the hard disk is full).
    3. When an object ends up in an invalid state (e.g. a negative size of a string).
  5. Throw exceptions by value but catch them by reference.
  6. Use the RAII principle: Resource Acquisition Is Initialisation.
    1. Use smart pointers where possible to avoid memory leaks during exception handling.
    2. A smart pointer is a class that contains a pointer but adds extra behaviour to it: it automatically destroys the object it points to when the smart pointer goes out of scope. A shared smart pointer also keeps a reference count of all the smart pointers that point to the same object and only deletes the object when the last one goes out of scope.
    3. Use Boost's shared_ptr as your smart pointer.
  7. Use the Singleton pattern.
    1. A singleton is a way to create a global variable, without it really being global.
    2. It's a combination of a static pointer to the class, a private constructor and a static GetInstance() method that creates the object once(!). A minimal sketch follows after this list.
  8. Use the Pimpl (pointer-to-implementation) idiom.
    1. A way to make your class implementation private (see the second sketch after this list).
    2. Class 1 contains a (smart) pointer data member to Class 2, which you want to keep private.
    3. In the .cpp file you implement Class 1 and define and implement Class 2.
    4. This way Class 2 is (almost) completely invisible, AND when it changes, compile times are shorter: no other classes directly depend on it (except for Class 1), so they don't need to be recompiled.
    5. The header file does not need to #include the classes used in private member variables, so compiling goes faster.
  9. Do proper class design.
    1. Value classes: created on the stack, no virtual functions, public destructor, public copy constructor and assignment operator; used like int, bool, double.
    2. Base classes: made to inherit from, usually used on the heap, public virtual destructor; disable the copy constructor and assignment operator (make them private).
    3. Data classes: static classes; all methods are static, no object is instantiated, functions are called on the class itself, private constructor.
    4. Behaviour classes: pluggable behaviour; can have virtual methods, used as a base class or as a data member of another class.
    5. Exception classes: public destructor, no-fail constructor/copy constructor, preferably derived from std::exception. Thrown by value, caught by reference (see above).
  10. Prefer composition over inheritance.
    1. Instead of inheriting class after class after class until you get a cluttered class hierarchy, use composition.
    2. Composition means that when you have a Hero class and you want him to be able to fly, you make a FlyBehaviour class and add it as a data member to the Hero, instead of inheriting from Hero and making a FlyingHero class. This way your class design is more flexible (see the third sketch below).
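
To make the singleton mechanics concrete, here is a minimal sketch. The Renderer class is a made-up example for illustration, not code from the engine:

class Renderer
{
public:
    //Creates the one instance on first use, returns the existing one afterwards.
    static Renderer* GetInstance()
    {
        if (m_pInstance == 0)
            m_pInstance = new Renderer();
        return m_pInstance;
    }
private:
    Renderer() {}                         //Private constructor: nobody else can instantiate the class.
    Renderer(const Renderer&);            //Copying is disabled as well (declared private, never defined).
    Renderer& operator=(const Renderer&);
    static Renderer* m_pInstance;         //The static pointer to the single instance.
};
Renderer* Renderer::m_pInstance = 0;      //In the .cpp file.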
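
The Pimpl idiom, as a minimal sketch under the same caveat (a hypothetical Texture class, illustrative names only):

//Texture.h - users of this header never see the implementation class.
#include <boost/shared_ptr.hpp>

class Texture
{
public:
    Texture();
    void Bind();
private:
    class Impl;                       //"Class 2" is only declared here...
    boost::shared_ptr<Impl> m_pImpl;  //...and held through a smart-pointer data member.
};

//Texture.cpp - "Class 2" is defined and implemented here, invisible to the outside.
class Texture::Impl
{
public:
    void Bind() { /* talk to the graphics API here */ }
};

Texture::Texture() : m_pImpl(new Impl()) {}
void Texture::Bind() { m_pImpl->Bind(); }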
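
And finally the composition example with the Hero, again as an illustrative sketch rather than engine code:

#include <boost/shared_ptr.hpp>

class FlyBehaviour
{
public:
    virtual ~FlyBehaviour() {}
    virtual void Fly() { /* default flying */ }
};

class Hero
{
public:
    //The behaviour is plugged in as a data member instead of baked into the hierarchy,
    //so it can even be swapped at runtime (e.g. for a NoFlyBehaviour).
    explicit Hero(boost::shared_ptr<FlyBehaviour> fly) : m_pFlyBehaviour(fly) {}
    void Update() { m_pFlyBehaviour->Fly(); }
private:
    boost::shared_ptr<FlyBehaviour> m_pFlyBehaviour;
};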

Index of this article

Click the titles to jump to a subject that explains more about a certain part of the engine.

  1. First Person Controller

  2. Skinning

  3. Pickable Items & Triggers

  4. Light

  5. Geometry Shader

  6. Occlusion Culling

  7. Post Processing

  8. Shadow Mapping

  9. PhysX & Fluids

  10. Audio

First Person Controller

The engine supports an implementation of a first person controller. Behind the scenes, this controller is actually an NVIDIA PhysX controller that takes the form of a capsule, through which interaction with physical objects becomes possible.

A camera is then attached to the controller, and through raycasting along the negative y-axis (height) we can detect whether the capsule is touching an object. Similar methods are used for ducking and for testing whether an object is above you. For jumping, a basic ballistic position equation was implemented: Pos_new = Pos_old + v·t + ½·g·t².
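
Before the actual engine code below, here is a tiny sketch of that jump update. The member names (m_JumpTime, m_JumpVelocity, m_JumpStartY) and the GRAVITY constant are made up for illustration, not the engine's actual members:

//Ballistic jump update: Pos_new = Pos_old + v·t + ½·g·t², applied to the height only.
void Character::UpdateJump(float dt)
{
    m_JumpTime += dt; //Time since the jump started.
    float t = m_JumpTime;
    //GRAVITY is negative (e.g. -9.81f); m_JumpVelocity is the initial upward speed.
    m_PosW.y = m_JumpStartY + m_JumpVelocity * t + 0.5f * GRAVITY * t * t;
    //End the jump as soon as the downward raycast reports ground contact again (see HasContact below).
    if (HasContact())
        m_JumpTime = 0.0f;
}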

//PhysX.cpp
//Method that creates the capsule controller.
NxCapsuleController* PhysX::CreateController(Character* player)
{
    //Controller descriptor:
    //The PhysX API (and actually DirectX as well) works with a system of descriptors.
    //These are structs with properties you can set.
    //You then pass the descriptor to a PhysX method that creates the object for you.
    //This design pattern is often referred to as the Factory pattern.
    NxCapsuleControllerDesc desc;
    desc.radius = player->GetRadius();  //Capsule radius. Character is a self-made class the controller
                                        //reads its properties from and is eventually assigned to.
    desc.height = player->GetHeight();  //Capsule height.
    D3DXVECTOR3 pos = player->GetPosition();
    desc.position = NxExtendedVec3(pos.x, pos.y, pos.z);
    desc.slopeLimit = cosf(NxMath::degToRad(45.0f)); //Limit angle of a slope the player can climb.
    desc.skinWidth = 0.1f;              //Allows a slight intersection of 2 objects; without it the objects
                                        //would constantly vibrate because of the alternating push and pull forces.
    desc.stepOffset = player->GetStepOffset();
    desc.climbingMode = CLIMB_EASY;
    desc.upDirection = NX_Y;
    desc.callback = &gControllerHitReport;
    //Creation of the CapsuleController.
    NxCapsuleController* c = static_cast<NxCapsuleController*>(m_pControllerManager->createController(m_pScene, desc));
    c->setCollision(true);              //It can collide.
    c->getActor()->userData = player;   //Assign the controller to the player.
    c->getActor()->setName("character");
    m_Controllers.push_back(c);
    return c;
}

//Method that returns whether a raycast hits a PhysX shape (NVIDIA PhysX API).
//@param ray: ray to cast.
//@param distance: distance to check for intersections.
bool PhysX::HasContact(NxRay& ray, float distance)
{
    NxRaycastHit hit;
    NxShape* result = m_pScene->raycastClosestShape(ray, NX_STATIC_SHAPES, hit, 1, distance);
    return result != NULL;
}

//Character.cpp
bool Character::HasContact()
{
    NxVec3 start(m_PosW.x, m_PosW.y, m_PosW.z);
    NxVec3 direction(0, -1, 0);
    NxRay ray(start, direction);
    return m_pPhysX->HasContact(ray, 1.9f);
}

Go back up

Skinning

With the help of a shader and a handy 3dsMax export script, bone-animation data can be added to a character in game.

The script exports the position, rotation and scale of every bone in the rig set up in 3dsMax, together with their parent-child hierarchy. This is written to an XML file.

To animate the skeleton (or rig), another maxscript exports the data of all bones for the chosen number of frames into an anim file.

The weights of the vertices of the mesh (the amount a vertex is attracted to one bone or another) are stored in the imported obj file and compiled at import (a vertex can be coupled to a maximum of 4 bones).

As a last step the engine imports all this information when compiling the .obj mesh. The mesh has a skeleton object that contains all animations and automatically updates the bone matrices each frame. These matrices are then given to the shader, which transforms all the vertices (and vertex normals) in the vertex shader according to the weights. This is done in local space, before multiplying with the WorldViewProjection matrix.

//Created by Koen Samyn, adapted by Thomas Meynen
psInput VertexShader(vsInput vIn)
{
    psInput vOut = (psInput)0;
    float4 vertex = float4(vIn.Pos, 1);
    float4 tVertex = 0;
    float3 tNormal = 0;
    float totalWeight = 0;
    //Blend the vertex with at most 4 bone matrices, weighted by the skinning weights.
    for (int i = 0; i < 4; i++)
    {
        int index = round(vIn.BoneIndices[i]) - 1;
        if (index > -1)
        {
            float weight = vIn.BoneWeights[i];
            totalWeight += weight;
            float4x4 boneMatrix = gBoneMatrices[index];
            tVertex += weight * mul(boneMatrix, vertex);
            tNormal += weight * mul(boneMatrix, float4(vIn.Norm, 0)).xyz;
        }
    }
    tVertex.w = 1.0f;
    //Multiply the skinned vertex position with the wvp matrix.
    vOut.Pos = mul(tVertex, g_mWVP);
    //Rotate the normal with the world matrix.
    vOut.Norm = mul(float4(tNormal, 0), g_mWorld).xyz;
    //Pass the texture coordinates through.
    vOut.Tex = vIn.Tex;
    return vOut;
}

Go back up

Pickable Items & Triggers

To get a real game going, it would be nice if we could pick up some items.
Also, it would be fun to be able to do something with those items in the environment, e.g.

  1. Open a door with a key.
  2. Turn off a light with a light switch.
  3. Give a zombie a distraction snack... (I'm just saying)

These things can be done using triggers!

But how do we implement triggers?

First of all we need some kind of invisible box or sphere the player can run into.
This is done by using the NVIDIA PhysX BoxShape, which is exactly that!
These shapes are the invisible collision shapes that can be assigned to a PhysX actor, which is eventually attached to a visible mesh. Besides shapes, this actor contains all the information PhysX needs to do the actual physics calculations, such as a body that represents the rigid-body properties of the object (mass, damping).
In the same way the capsule character controller was created above - with a descriptor object, factory pattern - these shapes are created with shape descriptors; in the case of a box, a BoxShapeDesc. This shape descriptor has a trigger flag which you can enable (or disable):

void CubePhysX::InitActor(int BodyType)
{
    // ...
    if (BodyType == GHOST)
    {
        BoxShapeDesc.shapeFlags |= NX_TRIGGER_ENABLE;
    }
    else if (BodyType == RIGID)
    {
        // ...
    }
}

Now, we create a PickableItem class and give it a trigger m_pTrigger of the type CubePhysX.

A pickable object is just such a trigger-box with a mesh attached to it.

An important part now is to set the userData of the trigger's actor to the current PickableItem object. This userData will come in handy soon.
void PickableItem::Initialize()
{
    //PickableItem has a trigger of the type CubePhysX, which we defined above.
    //This trigger has an actor with a userData property.
    //We set the userData to the current object of type PickableItem; you will soon see why.
    //...
    m_pTrigger->GetActor()->userData = this;
    //...
}

Next we have to override a PhysX callback method called onTrigger, which PhysX calls whenever another shape (e.g. our player) collides with the trigger.

//In our own class MyPhysX : public NxUserTriggerReport,
//which inherits from the PhysX NxUserTriggerReport class so we can override onTrigger.
void PhysX::onTrigger(NxShape& triggerShape, NxShape& otherShape, NxTriggerFlag status)
{
    NxActor& triggerActor = triggerShape.getActor();
    NxActor& otherActor = otherShape.getActor();
    if (triggerActor.userData == NULL || otherActor.userData == NULL)
        return;
    //This is where the magic happens:
    //We get the userData from the actor of the colliding shape.
    //The userData of this actor was set to our class, so we can cast to our class
    //(avoiding a C-style cast) and call its OnTriggerEnter/Exit methods to define the behaviour.
    Character* player = static_cast<Character*>(otherActor.userData);
    LevelElement* object = static_cast<LevelElement*>(triggerActor.userData);
    if (status & NX_TRIGGER_ON_ENTER)
    {
        object->OnTriggerEnter(player);
    }
    else if (status & NX_TRIGGER_ON_LEAVE)
    {
        object->OnTriggerExit(player);
    }
}

We then define two methods OnTriggerEnter and OnTriggerExit that define the behaviour of the trigger.

In the OnTriggerEnter function you can set the visibility of the mesh to false (the object is picked up) and add the item to the player's inventory (with e.g. an ID). Of course, don't forget to disable the trigger. Luckily this is done simply by getting the actor of the trigger and raising a flag:

void PickableItem::OnTriggerEnter(Character* player)
{
    //Pick up the item.
    player->AddToInventory(this);
    //Delete it from the world.
    GraphNode* pParent = static_cast<GraphNode*>(m_pParent);
    pParent->RemoveChild(this);
    //Disable triggering.
    DisableTrigger();
}

void PickableItem::DisableTrigger()
{
    m_pTrigger->GetActor()->raiseActorFlag(NX_AF_DISABLE_COLLISION);
}

Go back up

Light

And then there was light

Light is one of the most important things to make a scene visually interesting, to make a player believe he's in some kind of world (it doesn't need to be an existing one) where the usual rules of light transmission hold. People usually don't pay attention to how light behaves in the world, because we are accustomed to the sun, light bulbs, LEDs and the shadows they all cast. However, everybody will notice when something is wrong...

In games we use 'tricks' to approximate realistic lighting, because exact physical lighting calculations are still a bit heavy for most systems nowadays (it is improving though, and real-time ray tracing (google it) is advancing quickly).

For games, lighting of meshes is calculated in shaders, small programs that are run by the GPU. In DirectX these programs are written in HLSL (if you were to work in OpenGL/WebGL you would use GLSL).

Diffuse and specular lighting

Typically, diffuse lighting (how an object is colored by a light source) is calculated by taking the dot product of the surface normal with the direction of the incoming light. This means a surface is lit less the further it is turned away from the light source, which is intuitive. These surface normals can be calculated from the geometry or read from a texture, called a normal map.

The next step is to implement specular lighting: the reflection highlights of the light source that you see on your objects. The more reflective the material of the object (e.g. metal, chrome), the brighter and more concentrated these highlights will be. On a matte-surfaced object, however, the highlights will be more spread out and less bright.

The strength (brightness) of the highlights is often called the specularity, while the amount of spread is called the glossiness or shininess. These values can also be stored in a texture map, as is done with the normal map. This gives us the opportunity to define the material properties in detail without needing a dense mesh (lots of vertices).

Based on those textures and the properties of the light (position, direction, attenuation) we can calculate the diffuse and specular factors. I implemented two very common shading models for this:

  1. Phong Shading
  2. Blinn-Phong Shading.

I'm not going to explain these models in detail (look at the shaders below); just know that:

  1. When using a direct light (like the sun) the Blinn-Phong model is faster.
  2. When using point/spot-lights the Phong model is faster.

Shader

Below you see the pixel shader where all the lighting calculations are done. It uses helper functions to calculate the fog factor and the influence of the direct lights and spotlights on the pixel.

//Simple struct that stores the diffuse color and specular color.
struct ColorsOutput
{
    float4 Diffuse;
    float4 Specular;
};

//Pixel shader
//@param input: contains the interpolated position, uv's and normals of the vertex (no normal map in this case)
//and the camera position (input.fogDist) for the fog calculation.
float4 PSScenemain(PSSceneIn input) : SV_Target
{
    //Calculate the fog factor.
    float fog = CalcFogFactor(input.fogDist);

    //Initialise the lighting factors.
    float4 diffuseFactor = (float4)0.0;
    diffuseFactor.a = 1;
    float4 specularFactor = (float4)0.0;
    specularFactor.a = 1;
    //Manually tweaked specular power.
    float specPower = 0.5f;

    ColorsOutput cOut;
    //Calculate the influence of the spotlights (fogDist = camera/eye position).
    cOut = CalcSpotLighting(input.wNorm, input.wPos, input.fogDist, specPower);
    diffuseFactor += cOut.Diffuse;
    specularFactor += cOut.Specular;
    //Calculate the influence of the direct/parallel lights (shadowFactor comes from the shadow-mapping pass).
    cOut = CalcParallelLighting(input.wNorm, input.wPos, input.fogDist, specPower);
    diffuseFactor += cOut.Diffuse * shadowFactor;
    specularFactor += cOut.Specular * shadowFactor;

    //Calculate the output color based on the factors and the ambient color (passed to the shader).
    float3 outputColor = g_texDiffuse.Sample(g_samLinear, input.tex).rgb * (diffuseFactor + g_ambient).rgb;
    outputColor += g_texSpecular.Sample(g_samLinear, input.tex).rgb * specularFactor.rgb;
    //Get the alpha from the diffuse texture.
    float a = g_texDiffuse.Sample(g_samLinear, input.tex).a;
    float4 finalColor = float4(outputColor, a);
    //Blend with the fog color.
    return fog * finalColor + (1.0 - fog) * g_fogColor;
}

Parallel-light function:

ColorsOutput CalcParallelLighting(float3 worldNormal, float3 worldPos, float3 cameraPos, float specPower)
{
    ColorsOutput output = (ColorsOutput)0.0;
    for (int i = 0; i < g_numParallelLights; i++) //Loop over the active parallel lights.
    {
        //Calculate the diffuse factor, pretty straightforward.
        //The direction of the light is passed in the light struct.
        float diffuseFactor = dot(normalize(worldNormal), normalize(-g_parallelLights[i].Direction));
        [branch]
        if (diffuseFactor > 0.0) //Don't bother calculating the specular if the diffuse factor is negative.
        {
            output.Diffuse = saturate(diffuseFactor * g_parallelLights[i].Diffuse); //No attenuation: direct light.
            //Blinn-Phong specular: half vector between the view direction and the direction to the light.
            float3 halfAngle = normalize(normalize(-cameraPos) - normalize(g_parallelLights[i].Direction));
            output.Specular += max(0, pow(dot(halfAngle, worldNormal), specPower));
        }
    }
    return output;
}

Spotlight function:

//Function that calculates the diffuse and specular values of the pixel.
ColorsOutput CalcSpotLighting(float3 worldNormal, float3 worldPos, float3 cameraPos, float specPower)
{
    ColorsOutput output = (ColorsOutput)0.0;
    for (int i = 0; i < g_numSpotLights; i++) //Loop over the active spotlights.
    {
        float3 toLight = g_spotLights[i].Position - worldPos; //Vector from the pixel to the light.
        float3 lightDir = normalize(toLight);
        //Calculate the diffuse factor, pretty straightforward.
        float diffuseFactor = dot(lightDir, worldNormal);
        //Specular calculation.
        if (diffuseFactor > 0.0) //Don't bother calculating the specular if the diffuse factor is negative.
        {
            float lightDist = length(toLight);
            if (lightDist > g_spotLights[i].Range)
            {
                continue;
            }
            float fAtten = 1.0 / dot(g_spotLights[i].Attenuation, float4(1, lightDist, lightDist * lightDist, 0));
            //Spotlight factor: falls off away from the cone axis.
            float spotFactor = pow(saturate(dot(-lightDir, normalize(g_spotLights[i].Direction))), g_spotLights[i].SpotPower);
            //Diffuse coloration by the light.
            output.Diffuse += saturate(diffuseFactor * g_spotLights[i].Diffuse * fAtten);
            output.Diffuse *= spotFactor;
            output.Diffuse = max(0, output.Diffuse);
            output.Diffuse.a = 1;
            //Phong specular: reflect the incoming light around the normal and compare with the view direction.
            float3 reflectVec = reflect(-lightDir, worldNormal);
            float specFactor = pow(saturate(dot(reflectVec, normalize(cameraPos - worldPos))), specPower);
            output.Specular += specFactor * g_spotLights[i].Specular * fAtten;
        }
    }
    return output;
}

The shader also supports fog, which used to be provided by the DirectX 9 fixed-function pipeline but now has to be implemented in the shader yourself. There are several fog modes that define how the fog factor is computed from the distance.

float CalcFogFactor(float d)
{
    float fogCoeff = 1.0;
    if (FOGMODE_LINEAR == g_fogMode)
    {
        fogCoeff = (g_fogEnd - d) / (g_fogEnd - g_fogStart);
    }
    else if (FOGMODE_EXP == g_fogMode)
    {
        fogCoeff = 1.0 / pow(E, d * g_fogDensity);
    }
    else if (FOGMODE_EXP2 == g_fogMode)
    {
        fogCoeff = 1.0 / pow(E, d * d * g_fogDensity * g_fogDensity);
    }
    return clamp(fogCoeff, 0, 1);
}

Go back up

Geometry Shader

The animated gif shows you an example of a geometry shader. After implementing per-pixel lighting with the vertex and pixel shader (the latter is called a fragment shader in OpenGL's GLSL), a logical next step was to begin experimenting with geometry shaders.

Geometry shaders were first introduced in Shader Model 4.0 of DirectX 10 (which, by the way, is the version I worked with) and their popularity has grown over the years as more and more GPUs support them.

In essence a geometry shader allows for the creation of points, lines and triangles at runtime, on the GPU. They are executed after the vertex shader.

Mostly they are used for creating point sprites (billboards, e.g. trees in the background that are flat planes instead of real geometry) and for geometry tessellation (making meshes more detailed/complex without having to model them), but also for shadow-volume generation and single-pass rendering to a cube map (I still have to try those out one day).

Below you find 2 examples of geometry shaders:

  1. A spike generator.
  2. An explosion (along the surface normal) shader.

// Spike generator:
// Takes in a triangle and generates a new triangle in the form of a spike along the first triangle's normal.

//Struct that represents the geometry shader output.
struct GS_OUT
{
    float4 Pos : SV_POSITION;
    float3 Norm : NORMAL;
    float2 Tex : TEXCOORD0;
};

//Function used by the geometry shader to create a vertex and append it to the triangle stream.
void CreateVertex(inout TriangleStream<GS_OUT> triStream, float3 pos, float3 normal, float2 tc)
{
    GS_OUT gOut;
    gOut.Pos = mul(float4(pos, 1.0f), gWVP);
    gOut.Norm = mul(float4(normal, 0.0f), gWorld).xyz;
    gOut.Tex = tc;
    triStream.Append(gOut);
}

//Geometry shader that generates the spikes.
//@param vertices[3]: the 3 vertices that are sent from the vertex shader.
//@param triStream: you have to define a TriangleStream (HLSL object) that takes your vertex struct.
//This TriangleStream will hold the newly created vertices and automatically create new triangles.
[maxvertexcount(6)] //You have to state how many vertices are going to be created at most.
void SpikeGenerator(triangle VS_OUT vertices[3], inout TriangleStream<GS_OUT> triStream)
{
    float3 basepoint = (vertices[0].Pos + vertices[1].Pos + vertices[2].Pos) / 3.0f; //Center of the triangle.
    float3 normal = (vertices[0].Normal + vertices[1].Normal + vertices[2].Normal) / 3.0f; //Normal at the center of the triangle.
    float3 top = basepoint + 8 * normal; //Top of the new triangle.
    //We'll create 2 new vertices left and right of the basepoint, parallel to one of the triangle's sides.
    //These will form the base of our spike triangle.
    //We'll use the vector from vertices[1] to vertices[2].
    float3 direction = 0.1f * (vertices[2].Pos - vertices[1].Pos); //0.1f to make a small spike.
    float3 left = basepoint - direction;
    float3 right = basepoint + direction;
    float3 spikeNormal = cross(left - top, right - top);
    //Recreate the incoming triangle because we don't want to lose it.
    CreateVertex(triStream, vertices[0].Pos, vertices[0].Normal, vertices[0].Tex);
    CreateVertex(triStream, vertices[1].Pos, vertices[1].Normal, vertices[1].Tex);
    CreateVertex(triStream, vertices[2].Pos, vertices[2].Normal, vertices[2].Tex);
    //Make the new spike triangle.
    triStream.RestartStrip();
    CreateVertex(triStream, top, spikeNormal, float2(0, 0));
    CreateVertex(triStream, left, spikeNormal, float2(0, 0));
    CreateVertex(triStream, right, spikeNormal, float2(0, 0));
}

// Exploding shader:
// Takes in a triangle and repositions it in space along its surface normal.

//Struct that represents the geometry shader output.
struct GS_OUT
{
    float4 PosH : SV_POSITION;
    float3 NormW : TEXCOORD0;
    float2 Tex : TEXCOORD1;
};

//Geometry shader that pushes the triangles outwards along their normal.
//@param vertices[3]: the 3 vertices that are sent from the vertex shader.
//@param primID: this parameter is automatically generated for you by DirectX; it is the ID of the input triangle.
//@param triStream: you have to define a TriangleStream (HLSL object) that takes your vertex struct.
//This TriangleStream will hold the newly created vertices and automatically create a new triangle.
[maxvertexcount(3)] //You have to state how many vertices are going to be created at most.
void GSExploding(triangle VS_OUT vertices[3], uint primID : SV_PrimitiveID, inout TriangleStream<GS_OUT> triStream)
{
    GS_OUT gOut;
    //Calculate the surface normal of the triangle.
    float3 e0 = vertices[1].PosL - vertices[0].PosL;
    float3 e1 = vertices[2].PosL - vertices[0].PosL;
    float3 NormL = normalize(cross(e0, e1));
    //For every vertex, calculate its new position: the current one + time*normal + random*normal,
    //where time is a parameter passed by the engine to the shader and the random number is
    //calculated from the primitive ID to get some variation in the speed of the exploding parts.
    for (int i = 2; i > -1; --i)
    {
        float3 PosLNew = vertices[i].PosL + g_time * NormL + (i * primID % 3) / 50.0f * NormL;
        gOut.PosH = mul(float4(PosLNew, 1.0f), gWVP); //Multiply with the world-view-projection matrix.
        gOut.Tex = vertices[i].Tex;
        gOut.NormW = vertices[i].NormW;
        triStream.Append(gOut);                       //Append the vertex to the output stream.
    }
}

Go back up

Occlusion Culling

When you're moving around in a virtual world, the objects that are not shown on the screen don't really have to exist while you don't see them. What I mean is that it's a waste of calculation time to send the vertices of an object to the vertex shader, maybe pass them on to the geometry shader, and then calculate their position in homogeneous clip space, only to realize they are discarded in the clipping stage (after the geometry shader and before the rasterization stage of the pipeline).

The clipping stage checks whether a vertex lies within the frustum of the camera. If it doesn't, but other vertices of the same triangle (or polygon) do lie within the frustum, the triangle is cut by a clipping algorithm and a convex polygon is constructed. If no vertex lies inside, they are all discarded.

But to avoid all of this, we'll do a check ourselves to see whether the object will be in the frustum or not (this per-object check is commonly called frustum culling). Of course, only if the object is completely outside (no vertex is within) does it get discarded.

Now, how do we decide whether an object is within the frustum or not? There are several options, but the two easiest ones are:

  1. Calculate a bounding-sphere.
  2. Calculate a bounding-box or AABB (axis-aligned bounding box).

Both calculations can be done when the geometry is imported, and the result is stored within the object's class. Each frame we then check whether the object is visible or not; if it isn't, we just don't draw it:

//Get all the information from the camera so your frustum is clearly defined:
//camera position, aspect ratio, field of view, near plane, far plane, up-, right- and look-vectors.
//We get the current position in world space of the geometry we want to check for visibility.
//...
m_isVisible = true;
D3DXMATRIX world = this->GetWorldMatrix();
D3DXVECTOR3 pos = D3DXVECTOR3(world._41, world._42, world._43);
D3DXVECTOR3 d = pos - cam_pos;
//We project the difference vector d on each camera axis.
float pd = abs(D3DXVec3Dot(&d, &look));  //Projected distance.
float ph = abs(D3DXVec3Dot(&d, &up));    //Projected height.
float pw = abs(D3DXVec3Dot(&d, &right)); //Projected width.
//We calculate the size of the frustum at the position of the object to know what our boundaries are.
float ch = abs(tan(FOV / 2) * pd);      //Frustum height at the object's position.
float cw = abs(tan(FOV / 2) * pd * AR); //Frustum width at the object's position.
//Check whether the bounding sphere lies between the near and far plane.
if (pd + m_radius < nearplane && pd - m_radius < nearplane)
    m_isVisible = false;
if (pd + m_radius > farplane && pd - m_radius > farplane)
    m_isVisible = false;
//Check whether the bounding sphere does not exceed the frustum's top or bottom plane.
if (ph + m_radius > ch && ph - m_radius > ch)
    m_isVisible = false;
//Check whether the bounding sphere does not exceed the frustum's left or right plane.
if (pw + m_radius > cw && pw - m_radius > cw)
    m_isVisible = false;
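
The m_radius used above is the radius of the object's bounding sphere. As a rough sketch of how it can be computed once at import time (the centroid-based approach and the names ComputeBoundingSphere and m_Center are illustrative assumptions, not the engine's actual code):

#include <vector>

void Mesh::ComputeBoundingSphere(const std::vector<D3DXVECTOR3>& verts)
{
    //Take the centroid of all vertices as the sphere center.
    D3DXVECTOR3 center(0, 0, 0);
    for (size_t i = 0; i < verts.size(); ++i)
        center += verts[i];
    center /= (float)verts.size();
    //The radius is the distance to the farthest vertex.
    float radius = 0.0f;
    for (size_t i = 0; i < verts.size(); ++i)
    {
        D3DXVECTOR3 d = verts[i] - center;
        float len = D3DXVec3Length(&d);
        if (len > radius)
            radius = len;
    }
    m_Center = center;
    m_radius = radius;
}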

Go back up

Post Processing

Instead of rendering directly to the backbuffer, swapping and showing the backbuffer on the screen, DirectX allows you to render to a Texture2D, for which you can choose different data formats.
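
As a rough illustration of the idea, this is what creating such a render target looks like in DirectX 10. It's a minimal sketch, assuming an existing ID3D10Device* g_pDevice and a fixed 1280x720 size; error checking is omitted:

//Create a Texture2D we can render into and later sample as a shader resource.
D3D10_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(texDesc));
texDesc.Width = 1280;
texDesc.Height = 720;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; //Other data formats (e.g. floating point) are possible.
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D10_USAGE_DEFAULT;
texDesc.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D* pTexture = NULL;
ID3D10RenderTargetView* pRTV = NULL;
ID3D10ShaderResourceView* pSRV = NULL;
g_pDevice->CreateTexture2D(&texDesc, NULL, &pTexture);
g_pDevice->CreateRenderTargetView(pTexture, NULL, &pRTV);   //To render into the texture.
g_pDevice->CreateShaderResourceView(pTexture, NULL, &pSRV); //To sample it in the post-processing pass.
//Render the scene into the texture instead of the backbuffer...
g_pDevice->OMSetRenderTargets(1, &pRTV, NULL);
//...then bind the backbuffer again and draw a fullscreen quad that samples pSRV with a post-effect shader.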

Under construction.

Go back up

Shadow Mapping

Under construction.

Go back up

PhysX & Fluids

Under construction.

Go back up

Audio

Under construction. This section will cover audio with XACT.

Go back up