Away3D 3.6 Essentials

Away3D Essentials, by Matthew Casperson (Packt Publishing). Packt offers eBook versions of every book published, with PDF and ePub files available.

We do this to prevent rotation when the user doesn't interact with the controller. In this form, the two leading characters define the transparency, or alpha, of the color. Let's take a look at the code that makes up the Away3DTemplate class. Here you should notice one important detail: we assign an object called client to the client property of both the NetConnection and NetStream instances, which holds references to two event handlers, namely onMetaData and onBWDone. These concepts are easy to visualize if you apply them to a camera. Just replace the code block in the onEnterFrame function with the following:

How it works: We need to define two vectors in order to extract their dot product. As depicted in the preceding image, vector A is the face-direction vector of the camera.

We get this vector with the following code. The easiest way is to take the object's position and subtract the camera's position from it. The preceding equation is the programming representation of this formula; don't forget to convert the angle from radians to degrees.

Note that our calculated dot product is "camera-based"; that is, we checked the dot product of the object's direction relative to the camera, from the camera's point of view.
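Since the original code listings are stripped from this copy, here is a language-neutral sketch of the dot-product and angle math described above, written in JavaScript rather than ActionScript, with helper names invented for illustration:

```javascript
// Dot product of two 3D vectors.
function dot(a, b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Scale a vector to unit length.
function normalize(v) {
  const len = Math.sqrt(dot(v, v));
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Angle between the camera's forward vector and the direction to the
// object, converted from radians to degrees as the recipe reminds us to do.
function angleBetweenDegrees(forward, toObject) {
  const cos = dot(normalize(forward), normalize(toObject));
  return Math.acos(cos) * 180 / Math.PI;
}
```

Because both vectors are normalized first, the dot product equals the cosine of the angle between them, which Math.acos turns back into an angle.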

We can also do it from the object's side. In this case, our tweened spheres are the objects that calculate the direction vector to the camera and then use it to find the dot product. This approach is almost identical to the first one.

Changing lenses

In 3D graphics, geometry is projected onto a two-dimensional plane before being drawn on the screen.

This operation is essential because the computer screen, by nature, has only two dimensions; projection solves that problem by converting the 3D coordinates of an object in a 3D scene into 2D. There are different types of projection methods; the best known are Perspective and Orthographic projection. There is a clear technical and visual difference between the two. For instance, if you stand on a straight railroad, you can see that the rails eventually intersect somewhere on the line of the horizon, also called the vanishing point in computer graphics.

That is a basic visual example of Perspective projection. Orthographic projection, by contrast, is widespread in architectural visualization, mapping, and old top-view arcade games. If we take the example of the rails and apply it to Orthographic projection, in theory the rails would never intersect, because the coordinates are projected in parallel onto the projection plane, so a rendered object's projection is not skewed according to its size and distance from the camera.
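A minimal sketch of the difference (JavaScript, illustrative only, not the Away3D lens code): perspective projection scales x and y down with distance, while orthographic projection ignores distance entirely.

```javascript
// Perspective: points farther away (larger z) map closer to the centre,
// which is why parallel rails appear to converge at the vanishing point.
// Assumes the point lies in front of the camera (z >= 0).
function projectPerspective(p, focalLength) {
  const scale = focalLength / (focalLength + p.z);
  return { x: p.x * scale, y: p.y * scale };
}

// Orthographic: z is simply dropped, so scale is preserved at any distance.
function projectOrthographic(p) {
  return { x: p.x, y: p.y };
}
```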

Away3D gives us the ability to choose the kind of projection we wish by applying the right lens type; several lens classes are available for its cameras. In this specific example, we are concerned only with the Perspective and Orthogonal lenses, as the chances are really high that you will be tempted to use them. We add some geometry to test our lenses on. In this example, we use a city downtown model, created for these tests, which is found in this chapter's assets folder.

Then create a HoverCamera3D, as it is most suitable for interactively moving around the object. We first instantiate the PerspectiveLens and OrthogonalLens classes. Then we change the lenses by assigning one of these instances to the camera, and you will instantly see the difference between Perspective and Orthographic projection. As you can see from the images, the Perspective projection gives the model a natural real-world perspective look, whereas the Orthographic projection looks more like an architectural sketch, with all parts of the model retaining the same scale, both closer to and further from the camera.

We define the two types of lenses, Perspective and Orthogonal, within two functions triggered on a TIMER event, and then assign the chosen lens to the camera. The Away3D team is made up of some very serious lads, so they couldn't resist the temptation to write additional lenses for us.

SphericalLens gives us a slight perspective distortion of the rendered object, which looks bulgy, especially from a closer view; this is also called the fish-eye effect. Let's add an instance of it to the global variables list. Before we run the application, let's offset the pivot point a little. This way, the camera zooms in and out as it orbits around the model, and you can clearly see the projection effect.

So offset the pivot by writing this line at the bottom of the parse3ds method: Want to get a cooler result? No problem, let's hack the SphericalLens class.
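The hack described next boils down to scaling the already-projected screen coordinates. A hypothetical sketch in JavaScript (the function and parameter names are invented for illustration, not taken from the SphericalLens source):

```javascript
// Multiply the computed screen coordinates by extra distortion factors.
// Values above 1 exaggerate the fish-eye bulge along that axis,
// values below 1 flatten it.
function distortScreenVertex(v, xFactor, yFactor) {
  return { x: v.x * xFactor, y: v.y * yFactor };
}
```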

Go to the SphericalLens class and find its project method. This method projects the geometry onto the screen using the view matrix, the geometry's vertices, and the screen positions of the latter. We don't mess with the Matrix and Vertex stuff; instead, we distort the screen coordinates of the vertices some more, as those are responsible for the final rendered view. We can, for example, control the amount of vertical or horizontal distortion by adding a multiplication factor to the initial screenVertex calculation.

Following a third-person view with a spring camera

The Away3D camera arsenal has an additional camera class called SpringCam.

SpringCam is actually a Camera3D extended with additional physical behavior which, as the camera's name implies, is springing. SpringCam is well suited to creating third-person camera control systems. If you plan to develop a racing game or any other third-person view application, SpringCam is just the thing. SpringCam has three important physical properties to set. You don't need a crash course in classical mechanics to set them, but without understanding their meaning, it can take you hours to dial in the desired camera behavior.

Stiffness controls the stretching factor of the spring, that is, how far the camera can stretch during its spring movement. A bigger value means less expansion and more rigid spring behavior. You should be careful, though, if the damping and mass are low: when increasing the stiffness, you can encounter some crazy spring bounces, so it is recommended to keep the value in a low range. Damping is the friction force of the spring.

Its purpose is to control the strength of the spring's bounce: the higher the value, the weaker the spring's force. Mass is the camera's mass; it controls the weight of the camera, and setting it higher will slow down the SpringCam's movement after its target, so it is best to keep the value low. The three properties are interdependent: when tweaking each of them, you should take into account the values of the other two, as their magnitudes largely dictate how high or low the third should be.
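Conceptually, the three properties plug into ordinary damped-spring physics. This JavaScript sketch (one-dimensional, with assumed names, not SpringCam's actual source) shows why a higher mass makes the camera lag and a higher damping kills the bounce:

```javascript
// One integration step of a damped spring pulling the camera toward its
// rest position behind the target: F = -stiffness*stretch - damping*velocity.
function springStep(cam, restPos, stiffness, damping, mass, dt) {
  const stretch = cam.pos - restPos;
  const force = -stiffness * stretch - damping * cam.vel;
  cam.vel += (force / mass) * dt;   // a = F / m (semi-implicit Euler)
  cam.pos += cam.vel * dt;
  return cam;
}
```

Run it repeatedly and the camera oscillates around the rest position, settling faster as damping grows and more sluggishly as mass grows.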

Eventually, the best approach is to experiment with different ratios until you find the best match. We use a slightly modified version of our controller class so that, instead of a camera, it wraps a geometry model; it is found in this chapter's source code folder. In this program, we set up an environment with some geometry dispersed around so that we can get a better view of the spring camera's behavior. We also create a cube primitive that imitates the target object of the SpringCam.

In a real-life scenario, it could be a human character or a car. We set up our SpringCam in the initCamera function. The next method, named seedBuilding, creates cubes of different sizes for us and spreads them in a circular array around the scene. The most important task is to find and assign optimal values for the desired camera behavior to the damping, stiffness, and mass properties. As the target moves, the camera follows it, adjusting its rotation to the target's direction.

The camera's spring behavior shows in the way its distance to the target stretches when the target accelerates and, conversely, when the target's speed starts to die down, the camera accelerates back towards its default position relative to the target. As you move around the scene, the camera smoothly follows the character. The SpringCam can also serve as a first-person camera.

We can achieve this with just two lines of code. This is where the lookOffset and positionOffset properties come into play. First, let's move the z-position of the camera forward so that, by default, it is located in front of the target.

In the initCamera function, change the positionOffset to this: Do it by writing this line: And we are done! A nice first-person camera with just two lines of code!

Imagine now that you need to track a jet-plane with a target marker. In such a scenario, the jet-plane is a 3D model located in 3D space, whereas the target marker can be made from a regular 2D shape using a Flash Sprite.

The only thing we need to figure out is how to translate the 3D coordinates of the jet-plane into the Flash stage's 2D coordinate system. There are several ways to do this, but the easiest one is to calculate the screen position of the vertices. For now, instead of a jet-fighter, we will be using a Sphere primitive.

Copy the SWC file called graphicsLib into your project. It contains graphics for the 2D marker that we will use to track a 3D object's screen position. The marker moves around using the transformed x and y vertex coordinates that we pick randomly from the green sphere's mesh. In this program, we create an instance of a sphere that moves through the scene on a bezier path. Now, all the important things happen inside the onEnterFrame function. Note that if we don't pass a particular vertex as the second argument, the screen function returns a ScreenVertex at the center of the Object3D.

Now, because we need to transform the ScreenVertex coordinates from a coordinate system whose x- and y-axes begin in the middle of the screen to the regular Flash system, with x and y starting at the top-left corner, we write code that resolves the issue by offsetting the screen vertex coordinates according to the view's width and height. You can, if you wish, track each individual vertex of the given object to produce some fancy effects.
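The offset itself is one line per axis. A sketch of the idea in JavaScript (names assumed for illustration):

```javascript
// Convert a view-centred screen vertex (origin in the middle of the view)
// to Flash stage coordinates (origin at the top-left corner).
function toStageCoords(screenVertex, viewWidth, viewHeight) {
  return {
    x: screenVertex.x + viewWidth / 2,
    y: screenVertex.y + viewHeight / 2
  };
}
```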

Just replace the code block in the onEnterFrame function with the following, and run the application to see the result. See also the Transforming objects in 3D space relative to camera position recipe.

Transforming objects in 3D space relative to the camera position

Let's say you want to create a weapon marker for your FPS camera in 3D space, and not just by using a sprite object positioned in the center of the view. Or maybe you wish to create a full-scale 3D inventory or navigation menu that is always positioned relative to the camera's transformation.

Well, with the help of some basic vector math, you can do it in no time at all.

Getting ready

Create, as always, a basic Away3D scene. We are also going to use the FPSController class. Create some random geometry to provide a visual reference for our 3D world.

Here we create an Away3D Sphere primitive, which will always stay in the center of the camera view with an arbitrary z-axis offset. In real life, you could put a weapon marker bitmap inside a Sprite2D and use it instead. All the dark magic happens in the transformMarker function, which runs on each frame.

The first thing we need to do is to set the marker object's transformation to be the same as the camera's. We do this because, when the camera moves or rotates, we want the marker to move accordingly. To achieve this, we multiply the camera and marker transformation matrices; in other words, we transform the marker into camera space by applying the camera's rotation matrix to it, with a distance offset.

If we only clone the matrix of the camera and then apply it to the marker, you will still see that we have done nothing special. If you run the project now, you will see that the marker moves together with the camera and actually has the position of the camera.

We need the ability to offset the position of the marker relative to the camera's orientation. This is done by extracting a new position vector from multiplication with the matrix. This should give us the marker always showing a fixed number of pixels in front of the camera.
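The underlying math can be sketched like this (JavaScript, illustrative only; Away3D performs the equivalent with its matrix classes): rotate a local forward offset by the camera's rotation matrix, then add the camera's position.

```javascript
// Place a marker `distance` units in front of the camera: apply the
// camera's 3x3 rotation matrix (row-major) to the local offset
// (0, 0, distance), then translate by the camera position.
function markerPosition(camPos, camRotation, distance) {
  const local = [0, 0, distance];
  const rotate = row =>
    camRotation[row][0] * local[0] +
    camRotation[row][1] * local[1] +
    camRotation[row][2] * local[2];
  return {
    x: camPos.x + rotate(0),
    y: camPos.y + rotate(1),
    z: camPos.z + rotate(2)
  };
}
```

With an identity rotation the marker sits straight ahead on the z axis; once the camera turns, the offset turns with it.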

Having said this, you can now also offset the x and y positions of the marker if you need it in a different position than the center of the camera view. As you can see from this really primitive example, vector math is not that scary, but it can get much more complicated in certain scenarios, as you will learn further on.

Using Quaternion camera transformations for advanced image gallery viewing

In 3D graphics, quaternions are an alternative approach to transformations. Although they are harder to grasp than axis or Euler angles and matrix transformations, there are obvious advantages to using them in certain scenarios.

We will not dive into any mathematical explanation of how quaternions work, as it is quite a complicated topic. Simply put, quaternions allow us to interpolate from one transformation state to another in a smoother and shorter way, using the spherical linear interpolation (SLERP) method.

Another advantage of quaternions is low memory consumption: a rotation matrix uses nine numbers to represent an orientation, whereas a quaternion needs only four. Although the Away3D transformation system is matrix-based, the library has a Quaternion class that resides in the core package.

In this recipe, you will learn how to use quaternion methods in Away3D to produce cool camera transformations on the images of a 3D gallery. From this example, you will see how quaternions smooth transitions between different axis orientations during the interpolation process. We prepare a set of several images to use for the gallery.

We also need to extend Camera3D so that it has a public property called slerp. Extend Camera3D as follows: For some unknown reason, at the time of this writing, the Quaternion class lacks three important methods: createFromMatrix, quaternion2Matrix, and slerp. The first converts a matrix transformation into a quaternion, the second does the opposite, while slerp performs the SLERP.

In order to shorten the amount of code, I put the modified Quaternion class in Chapter 2's source code folder; take it from there and put it into your project. Let's get to work! Here is the final program; the step-by-step explanation is in the next section. Such a cool effect could be a tricky thing to achieve by means of matrix transformations alone. First, we disperse planes all over the scene with random positions and rotations using the initGeometry method.

Each plane gets a random image assigned with BitmapMaterial. There are several important things that can happen when we click on the plane. In the onMouse3DDown function, we get the current clicked plane instance. The reason we don't use the target plane is that we want to position the camera opposite to the image and not in the same place.

Next we define a tween object that feeds tween parameters for the TweenMax.

Before we can start the Quaternion transformation of our camera, we need to acquire Quaternion values for the camera as well as for its target.

We extract them from their transform matrices. During the tween, TweenMax executes a callback on each position update using its onUpdate event handler, which triggers the onTweenProgress method. In that method, we convert the current transformation values back from the camera quaternion to its transform matrix in order to rotate the camera.

Remember that Away3D rotations are matrix-based? We do it here: By clicking in the space outside the image plane, we trigger the Flash MouseEvent. Note that we have to reset the camera's slerp property to zero in order to start a new interpolation. Now let's see where all the quaternion wonder occurs: the slerp property here is a factor that dictates the amount of interpolation to accomplish.
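For reference, here is what SLERP computes, sketched in JavaScript rather than ActionScript (a generic textbook formulation, not the Away3D Quaternion source):

```javascript
// Spherical linear interpolation between quaternions (w, x, y, z).
// t = 0 gives the start orientation, t = 1 the target; values in between
// blend at constant angular speed along the shorter arc.
function slerp(qa, qb, t) {
  let cos = qa.w*qb.w + qa.x*qb.x + qa.y*qb.y + qa.z*qb.z;
  let b = qb;
  if (cos < 0) {               // take the shorter arc
    cos = -cos;
    b = { w: -qb.w, x: -qb.x, y: -qb.y, z: -qb.z };
  }
  if (cos > 0.9995) {          // nearly identical: fall back to lerp
    return { w: qa.w + t*(b.w-qa.w), x: qa.x + t*(b.x-qa.x),
             y: qa.y + t*(b.y-qa.y), z: qa.z + t*(b.z-qa.z) };
  }
  const theta = Math.acos(cos);
  const sa = Math.sin((1 - t) * theta) / Math.sin(theta);
  const sb = Math.sin(t * theta) / Math.sin(theta);
  return { w: sa*qa.w + sb*b.w, x: sa*qa.x + sb*b.x,
           y: sa*qa.y + sb*b.y, z: sa*qa.z + sb*b.z };
}
```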

Looked at as a percentage, a slerp value between zero and one completes that fraction of the interpolation; the target value of one fully completes it.

Today, even the most mind-blowingly designed website is destined to perish, abandoned by a user bored to death within minutes, if it lacks any decent GUI animation. All the more so when we talk about Flash, as this program was initially developed to bring motion to the web.

It is obvious that 3D graphics have a deeper impact on the user experience. However, no matter how cool your 3D scene looks, with neat materials and state-of-the-art models, if it is motionless, the chance is high that all your hard work will be underrated or become a complete failure. The world around us is 3D and in constant motion. If you bring 3D content into the web, the next natural step is to animate it.

The difference here is that we are going to animate visual objects in 3D space, as opposed to 2D in regular Flash applications (well, since the introduction of Flash Player 10, that is not exactly true). We will learn to animate primitive geometry in 3D space using advanced, full-featured tween engines as well as Away3D's built-in animation utilities. You will also learn how to set up and animate bones in a 3D modeling program and how to control them inside Away3D.

We will see how to morph a 3D object by animating its vertices and faces. We are also going to cover some advanced topics, such as Inverse Kinematics, that will give you some insight into what can be done with a little more math.

Let's get started. Another popular term for this kind of animation is character rigging, which in professional circles means skeletal animation. Fortunately, the Away3D team developed a full set of tools to parse and control pre-defined animation data of a model. Away3D deals with two types of external animation: vertex and bone animation. Vertex animation stores a series of individual vertex positions and interpolates between them over time.

The downside of this technique is parsing time, as models with a large number of animated vertices produce really heavy files. At the same time, this type of animation is best suited for organic deformations, such as a face or cloth, where a smooth result is required that cannot be achieved with bone animation. Bone, or skeletal, animation works instead by transforming groups of vertices based on the bones' transformation matrix data. This type yields much less animation data to parse but, since in Away3D it is stored in the Collada format, which is XML, parsing is in many cases slower than for MD2 vertex animations, which are binary.

Getting ready

Open Autodesk 3dsMax version 7 or later. Go to this chapter's assets folder and open the Spy model file. Alternatively, you can work with your own low-poly character model in this example. In 3dsMax, go to the top menu and click Create; then, in the drop-down list, go to Systems and select Biped. You will now see the Character Studio menu open in the tools sidebar.

In the active viewport, click the left mouse button and drag it to create a skeleton of the size you need. Center the Spy mesh and the skeleton at the center of the scene. Now select limb bones and move them to position in the center of the related limb of the mesh.

This part is very important because, if a certain bone doesn't contain all the relevant mesh vertices within its bounding radius, it may result in really ugly artifacts and distortions when animating. The end result should look like this: in both images, you can see the skeleton bone system filling the Spy mesh. Pay attention to the left wireframe view: all the bones are contained exactly inside the mesh.

Now we have to apply the skeleton bones to the mesh in order to be able to further rig the character. For this, we use the Skin modifier. Select the Spy mesh in the viewport and in the side bar, open the modifiers drop-down list, and select Skin. In the Skin modifier parameters menu, go to the Bones dialog and click on Add button and select all the bones of the biped from the list. Now we can start animating our Spy. We can rig the character the easy or the hard way.

The hard way is to manually define the keyframes for each new motion state of the character. The easy way is to use the automatic footstep system. For this example, we will choose the latter.

Character studio allows us to define walk, run, and jump animations in a few easy steps. Also you can load motion capture files if you wish to have a realistic looking animation sequence. Here we will define two sequences—jump and walk using the Character studio's Footstep mode. Click the yellow pelvis of the skeleton in order to access the Character Studio interface at the side bar. In the Biped dialog, click a Footstep mode button. In the Footstep drop-down dialog, click the Jump button, then click the Create steps button in the same dialog.

Now focus on the viewport. You can see that the default mouse cursor has changed to the footstep icon. Click inside the viewport to define footsteps for the character. Set six steps for the jump mode. When you are finished, go back to the Biped side bar menu and in the Footstep Operations drop-down dialog, click the Create keys for inactive footsteps.

Now the defined animation is registered to the timeline keyframes. Next, we want to define a walk sequence. Click and drag the timeline scrubber to the desired frame. In the Footstep Creation dialog, select Walk mode and repeat the preceding steps. Now the Spy walks and jumps.

As you can see here, the character's motion is defined by the footsteps system, which allows us to create pretty astounding results in a matter of minutes. When exporting, in the Collada export dialog, you should check the Enable Export and Sample animation checkboxes and adjust the timeline range to the length of your defined animation sequence. The Biped system is definitely the best choice when you need to animate a human character. As you have seen, it gives us a prebuilt IK skeleton consisting of joined bone objects.

The Biped structure, being part of the character animation system, is a formidable character animation tool. You can find royalty-free, sophisticated motion capture files and load them into 3dsMax to achieve astonishingly realistic human animation effects. Of course, if you need a custom rig different from a human or animal, you can create it with the regular Bones system of 3dsMax; this approach is usually more time-consuming than the Biped setup. In this example, we touched on only a tiny part of what can be done with 3dsMax Character Studio.

There are many books and online tutorials that can teach you various aspects and advanced techniques of the animation creation process in this 3D program. You can use these resources to improve your animating skills. Away3D gives us advanced tools to control externally created bones and vertices-based animations. In this example, you will learn how to access and control the bones-based Collada animation. Also, you will see how to separate single animation clips into loops of different animation sequences and how to switch between them.

Create a new Away3D scene extending our AwayTemplate class. You also need to add an animated Collada version of the Spy model to your assets. The file resides in the assets folder inside this chapter's files directory under the name spyJumpAndWalk. From the same folder, also import the spyObjRe. In the following program, we initiate an embedded animated character.

We create two control buttons that activate two different animation sequences, "jump" and "walk", which we set up in 3dsMax initially. In this method, we first access the AnimationLibrary of our model, which holds all the animation data, such as channels and bones. We do this by passing the name of the AnimationData.

When exporting a Collada animation from a 3D modeling program using the Collada exporters, the name of the animation is always "default". Therefore, we pass it in the following line: The problem is that both sequences are embedded in the same timeline and, in Away3D, represented as one channel set inside a single AnimationData object.

That means we have to improvise in order to play these two animations separately. To accomplish that, we first add the following line inside the accessAnimationData function, so that we can track the animation's frame position during playback.

In 3dsMax, we have two animation frame ranges: the "jump" animation runs from frame 0 to frame 50, and the "walk" animation begins at frame 50. In the onAnimKeyFrameEnter function, we check these ranges by writing the following code. Then the selected sequence starts playing.

When the play head of this sequence reaches one of the end frames defined in the previous function, we check whether it is a "walk" or "jump" sequence and reset the play head to the first frame of the related animation span. With this setup, the continuous loop is interrupted only when another button is pressed. See also the Rigging characters in 3dsMax recipe.
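The playhead logic amounts to a range check per keyframe. A JavaScript sketch of the idea (the walk range's end frame, 100 here, is a placeholder; the actual value comes from your 3dsMax timeline):

```javascript
// Both sequences share one timeline; on each keyframe we advance the
// playhead and wrap it back to the start of the active sequence's range.
const RANGES = {
  jump: { start: 0,  end: 50  },
  walk: { start: 50, end: 100 }   // end frame assumed for illustration
};

function nextFrame(current, sequence) {
  const r = RANGES[sequence];
  return current + 1 >= r.end ? r.start : current + 1;
}
```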

MD2 is a lightweight binary format and is therefore parsed much faster than XML-based Collada animations. Here you will learn how to set it up and play it in Away3D. The animation name stored in the MD2 file may come with unwanted characters appended, and you have two options in this case: just copy and paste such a name into the accessAnimationByName method parameter right from the debugger, or hack the code of the Away3D MD2 parser so that, when reading the bytes for the name property, all non-alphabetical characters are ignored.

We will go for the second solution. In the Md2 class, find the following code block and add the marked lines in the exact places depicted here. If you want numeric values to be included as well, just add an additional character range that covers digits to the preceding if statement. Also, if you are acquainted with regular expressions, you could use them instead, although in this scenario it would be tricky to define a good expression.

Now set up a basic Away3D scene extending AwayTemplate. Make sure you have the spyAnimatedReady. It is found in this chapter's assets folder. Press the Stand and Walk buttons to trigger the different types of animation. In the initGeometry method, after the Spy model has been parsed, we call a custom method, accessAnimationByName, that we created to access the AnimationData of the model. Other parameters are optional, such as delay, loop, interpolation, and fps.

Inside the function, we first access the model's AnimationLibrary. We call this function by clicking one of the buttons we created to trigger animations. That is all; MD2 is that easy, isn't it?

This can be achieved in several ways. You can morph geometry by moving individual vertices or faces manually. However, before you run for a non-Away3D solution, you should know that the library has a special class named Morpher that resides in the core package. In the following example, you will learn how to morph a sphere primitive at runtime using the Away3D Morpher tool. In the following program, we create three spheres: the first is to be morphed, and the second serves as a deformation pattern target for the first sphere.

The additional sphere has no deformation applied and serves to reset the deformed sphere's shape to the initial one. Let's explain the preceding code. Inside the initGeometry method, we create three spheres. The Morpher class works in the following way: click it and you will see that the morphed sphere becomes even more distorted. Now go along and have fun with it!

Animating geometry with Tween engines

When you need to animate your 3D objects' positions, tween engines are usually the best solution, not only for basic transformations but even for powering complex transitions such as those found in particle effects.

Here, as well as in the rest of the book, we will use GreenSock's TweenMax engine which is considered to be one of the fastest and best featured on the market. It is free for non-commercial use. In the following program, we are going to create a transition of a group of sprites in Away3D space between three different initial formations—random disperse, cube formation, and sphere formation.

You will see that pretty cool effects can be accomplished by leveraging the potential of a tween engine. The credit for the original idea goes to Mr. Doob. Note that not every tween engine supports grouping tweens into sequences; please refer to the documentation of your favorite engine to determine whether such a feature exists.

Make sure you have the TweenMax SWC library attached to your project. In the following example, we will create three arrays of vertex positions: those of cube and sphere primitives, plus one random array. Then, using TweenMax, we interpolate the transitions of the DepthOfFieldSprite particle objects' positions based on the previously mentioned arrays of vertex coordinates. The first two formations are based on mesh vertex vectors and the last is random. As you can see, all the fun stuff is inside the initTweens method.

Then we start a routine of filling a two-dimensional array, vertexPosArr, with vertex position data, which we run three times, once for each geometrical form. First for a cube primitive formation: notice that I deliberately pick a vertex whose index is three times smaller than the corresponding one in the next loop. This way, we create a nicely recessed effect in our particle formation. Next, we run the same loop, but this time for the Sphere primitive. We then define an area of a given width, height, and depth in which to calculate random coordinates. Our goal here is to create a set of tween transformations between our three predefined formations by grouping them into sequences of tweens.

We extract three different vertex-position data formations for three different types of tweens this way: by packing all these tweens into a group, we receive an animation sequence consisting of these three pairs of tweens. Now, on starting the application, we will see nothing, because what we tween are sheer numbers that represent the vertices' positions in 3D space.

We need to connect the sprites that we created earlier to these numbers. We do it through the onEnterFrame method.

Moving an object on top of the geometry with FaceLink

You may have a scenario where you wish to animate the position of one object on top of the surface of another.

The Away3D FaceLink class, located in the away3d package, calculates for you the average coordinates of the three vertices of each face and lets you tween your object from one face position to another in a sequential manner, as well as offset that position on demand.
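The "average coordinates of three vertices" is simply the face centroid. A sketch of that computation in JavaScript (FaceLink itself is ActionScript; this just illustrates the math):

```javascript
// Centre of a triangular face: the arithmetic mean of its three vertices.
function faceCenter(v0, v1, v2) {
  return {
    x: (v0.x + v1.x + v2.x) / 3,
    y: (v0.y + v1.y + v2.y) / 3,
    z: (v0.z + v1.z + v2.z) / 3
  };
}
```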

In the next example, we will create a utility that allows us to adjust the position of a sapper kit relative to the Spy's mesh. This way, instead of manually guessing where the Spy's hand vertices are located, we run through all of them, animating the position of the sapper until it reaches the hand area.

In this example, we use a knob control from the great components library MinimalComps by Keith Peters, which you can get for free, to animate the sapper's position. In the following program, we will add the Spy model and his sapper kit to the scene. Using the knob control, we will animate the position of the sapper over the surface of the Spy mesh. Thanks to FaceLink, you save a lot of precious time otherwise spent trying to locate the desired vertex area of your mesh. Let's go over the crucial parts of the code.

In initUI, we initialize the Knob control (see the documentation on the MinimalComps web page for more info). We assign the knob a value range from zero to the total number of faces of the Spy model. This way, when we drag the knob to its maximum, the animated sapper object reaches the last face position. If you passed FaceLink just a reference to the nested mesh, it would not behave correctly, because in that case the mesh vertex coordinates are relative to the parent object and not to global space.

Finally, in the onKnobMove method, which is called each time the knob value changes, we shift the position of the sapper by assigning the current face according to the index value provided by the knob control.

When we surf a website or play online games, we expect them to feature an advanced set of interactive GUI elements. In fact, today we can't even imagine a web page with no ability to touch, turn, and move the content around. Otherwise, we could be quite satisfied just watching a TV set, as we did in the past.

When we talk about a 3D virtual environment, interactivity plays a critical role as well. Imagine if, in the real world, we weren't able to control the physical objects around us! The same applies to a real-time 3D application. Having a mind-blowing 3D scene with animated meshes flying all around it is not enough unless the user is also able to control them.

That is where the ultimate user experience lies.

Fun by Adding Interactivity

In this chapter, we will learn some common as well as advanced techniques for making your project interactive. We will see how to perform advanced interactive transformations in 3D space, drag objects, and even create a full-featured interactive car.

Sometimes the implementation of advanced interactivity demands a decent knowledge of math. Here you will also learn that this is not as scary as it sounds.

Adding rotational interactivity to an Away3D primitive by using mouse movements

In this recipe, you will learn how to add basic interactivity to an Away3D primitive, rotating it according to mouse movements on the screen.

Although this feature is far from rocket science, it is not uncommon to see incorrect implementations of it in many applications, resulting in quite unnatural or imperfect behavior of the interacted objects. In the next program, we create a typical Away3D sphere primitive.

Our goal is to rotate the sphere around its x- and y-axes according to the mouse's x and y movement on the stage. The magic formula in all the preceding code consists of just two lines, which run in the onEnterFrame function: the object is rotated by values derived from the delta between the last and current mouse positions.
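The delta technique can be sketched in a few lines. This is plain TypeScript illustrating the math, not the book's ActionScript; the variable names are assumptions.

```typescript
// Sketch of delta-based rotation: each frame, the object's rotation is
// incremented by the difference between the current and previous mouse
// positions, so a fast drag rotates more than a slow one.
let lastMouseX = 0, lastMouseY = 0;
let rotationX = 0, rotationY = 0;

function onEnterFrame(mouseX: number, mouseY: number): void {
  rotationY += mouseX - lastMouseX; // horizontal drag spins around Y
  rotationX += mouseY - lastMouseY; // vertical drag spins around X
  lastMouseX = mouseX;              // remember the position for the next frame
  lastMouseY = mouseY;
}
```

Using absolute mouse positions instead of deltas is the common mistake that produces the unnatural behavior mentioned above: the object then snaps to the cursor rather than following the drag.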

More sophisticated interactivity is implemented in a system called a trackball (or arcball), in which the rotation is entirely based on 3D vector math and therefore delivers much more realistic results.

It will be discussed in the following recipe. We add the MouseEvent listeners. Inside the onMouseDown method, we update the following variables; these delta values then increment the rotation values of the sphere. Now let's add more realism to the behavior we created in the preceding example. It would be nice if the sphere could spin with some kind of acceleration when the mouse moves fast, and ease down when the mouse is released.

Here is how you can accomplish this. In the member variables block, add these two lines. The values are in the range between zero and one; the greater the value, the less easing is applied to the sphere on each frame, so it spins faster and longer. We must update these values while we drag the rotation with the mouse. Now comment out the previous code in the onEnterFrame method and add the following instead. In the first state, we handle the sphere rotation just as we did in the first scenario.

But when we release the mouse, we simulate the inertia of the object's rotation velocity with the following lines, inserted before applying the easing. See also the Implementing advanced object rotation using vector math recipe in this chapter.
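The spin-and-ease behavior described above can be sketched as follows. This is an illustrative TypeScript sketch with assumed names, not the book's exact code: while dragging we record an angular velocity, and after release we keep applying it, damped each frame by an easing factor in (0, 1).

```typescript
// Inertia sketch: drag speed becomes angular velocity, which decays
// frame by frame once the mouse is released.
let velocity = 0;      // angular velocity captured from the drag
const damping = 0.9;   // closer to 1 => the sphere spins longer
let rotation = 0;
let mouseIsDown = false;

function frame(mouseDelta: number): void {
  if (mouseIsDown) {
    rotation += mouseDelta;
    velocity = mouseDelta;   // remember the last drag speed
  } else {
    rotation += velocity;    // keep spinning...
    velocity *= damping;     // ...while easing down each frame
  }
}
```

A fast final drag leaves a large velocity, so the sphere spins on and gradually coasts to a stop.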

This effect is accomplished through vector calculations derived from the mouse's position in the scene, and the rotation itself is processed not around the default local axes but around an arbitrary axis, which is acquired through fairly simple vector math as well. The advantage of this approach is that the result is very realistic and interactively accurate. The following program creates a sphere that is rotated by the user's mouse drag using vector (cross) product calculations based on mouse input, rather than simple screen-space mouse delta values. The only block of code of interest here is inside the onEnterFrame method.

Our first goal is to calculate a rotation axis for the sphere. Because we do not want to fix the rotation to the object's default local x-, y-, and z-axes, but rather derive it from the direction of the mouse movement, we take a cross product, which returns a vector perpendicular to two other vectors lying in the same plane. We acquire the cross-product vector from the local z-axis and from a vector filled with the mouse X and Y positions. The next step is to create a matrix, populate its rotation portion with our new rotation axis and the angle by which we wish to rotate, and then assign it to the sphere in order to transform (in our case, just rotate) it.
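The axis calculation is ordinary vector math and can be sketched in TypeScript (illustrative only; Away3D's Number3D class provides an equivalent cross-product operation):

```typescript
// Cross product: returns a vector perpendicular to both inputs.
type Vec3 = [number, number, number];

function cross(a: Vec3, b: Vec3): Vec3 {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

// Axis for a mouse drag (dx, dy): perpendicular to the view's z-axis and
// to the drag direction, so the sphere rolls "under" the cursor.
function dragAxis(dx: number, dy: number): Vec3 {
  const zAxis: Vec3 = [0, 0, 1];
  return cross(zAxis, [dx, dy, 0]);
}
```

A purely horizontal drag, for instance, yields an axis along Y, which is exactly the axis you would expect the sphere to spin around.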

We set it with the following lines. Otherwise, when you start rotating, the sphere will move to the center of the coordinate system, because the resulting Matrix3D translation values are zero by default.

We fix this easily by adding the position back manually. Math is definitely our best friend when we need to create more sophisticated results. See also Chapter 11, Creating a Virtual Trackball.

Creating advanced spherical surface transitions with Quaternions

There are scenarios when you want to move an object across the surface of a geometry according to the mouse coordinates.

Initially, the task is quite simple if we deal with flat surfaces. But once you would like to perform such a movement on top of spherical objects, things get much more complicated. If you attempt to tween your satellite (let's call it that) by assigning it just the coordinates derived from the surface mouse-click point, you will see that if the transition distance is large, your satellite will pass through the sphere mesh on its way to the destination coordinates.

Quaternion math is pretty hard to grasp for a regular human being, but Away3D supplies us with a Quaternion class which encapsulates all the required functions and performs those complex calculations for us.

So the only thing we should know is how and when to use it.

Getting ready

Set up a new Away3D scene using AwayTemplate, and don't panic as we dive a little deeper into 3D math. Make sure you copy into your project the modified Quaternion class found in the ActionScript Classes utils folder.

The difference from the Away3D Quaternion is some additional methods that are essential to accomplish our task. Alternatively, you can just copy all those additional functions into the Away3D Quaternion class. In this program, we set up a sphere and a small plane that moves across the sphere's surface, at some distance from it, to a position based on coordinates acquired when clicking the sphere with the mouse. Now let's explain, step by step, what is going on here.

The show begins in the onObjMouseDown function. First, we get the mouse-click vector on the sphere's surface and scale it up a little in order to keep our satellite plane at some distance from the sphere's surface. Next, we calculate the cross product of the current tracker position and the target position in order to get a new rotation axis. Then we multiply the quaternion resulting from the first multiplication by the axis quaternion we acquired in the previous step, so that it is perpendicular to it.
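Why does a quaternion rotation keep the satellite off the mesh? Rotating a position vector around an axis preserves its length, so the satellite stays at a constant distance from the sphere's centre for the whole transition instead of cutting a straight chord through the mesh. Here is a minimal TypeScript sketch of that rotation math (assumed helper functions, not the modified Away3D Quaternion class):

```typescript
// Minimal quaternion sketch: build a rotation from an axis and angle,
// then rotate a vector with q * v * conj(q).
type Quat = { w: number; x: number; y: number; z: number };

function quatFromAxisAngle(ax: number, ay: number, az: number, angle: number): Quat {
  const len = Math.hypot(ax, ay, az);          // normalise the axis
  const s = Math.sin(angle / 2) / len;
  return { w: Math.cos(angle / 2), x: ax * s, y: ay * s, z: az * s };
}

function quatMultiply(a: Quat, b: Quat): Quat {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

function rotateVector(q: Quat, v: [number, number, number]): [number, number, number] {
  const p: Quat = { w: 0, x: v[0], y: v[1], z: v[2] };
  const conj: Quat = { w: q.w, x: -q.x, y: -q.y, z: -q.z };
  const r = quatMultiply(quatMultiply(q, p), conj);
  return [r.x, r.y, r.z];
}
```

Interpolating the angle of such a rotation over time is what produces the great-circle slide across the sphere's surface.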

In this case, we could make a more precise calculation based on the actual normal direction of each face beneath the tracker, but the final result would be pretty much the same. You can learn more about quaternion transformations from the following sources.

Interactively painting on the model's texture

Adding interactivity to your object does not always mean just making it respond to mouse clicks or transforming it around the scene.

You can have much more fun if you are also able to change the way your object looks. In this recipe, you will learn how to paint on the material of your 3D object interactively at runtime, using the mouse as a brush. In this example, we use a ColorChooser component to switch between colors, either from the drop-down palette it supplies or by entering hex values.

In the following scenario, we set up a sphere primitive to which we apply a MovieMaterial with Perlin noise BitmapData as input. By clicking and dragging the mouse on the sphere's surface, you can paint on it with different colors chosen via the ColorChooser component. First, we set up a sphere primitive in the initGeometry method and assign to it three event listeners, all for MouseEvent3D. The familiar paint effect you can see in Adobe Photoshop is called an airbrush.

Additionally, we have an event handler, onMouseMove, that is called on each mouse movement. The draw method receives two arguments, which are the uv coordinates on the material's texture. Also note the subtraction applied to the event's v coordinate.

That is because, contrary to the Cartesian system, the v (y) axis in uv, or barycentric, space is flipped. One limitation of the drawing technique described above is that we draw the vector data directly into the MovieClip. If you wish to change the blending mode of each brush stroke, or even apply filters to it, you need to draw Shape objects instead, through which you are able to add filters or change the blendMode property.
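The uv-to-pixel conversion with the flipped v axis can be sketched as follows (an illustrative TypeScript helper; the name is ours):

```typescript
// Convert a uv coordinate into a bitmap pixel position. u and v run
// from 0 to 1; v grows "upward" in uv space while bitmap y grows
// downward, so v must be flipped with (1 - v) before drawing the brush.
function uvToPixel(
  u: number, v: number,
  textureWidth: number, textureHeight: number
): { x: number; y: number } {
  return {
    x: u * textureWidth,
    y: (1 - v) * textureHeight, // flip the v axis for screen space
  };
}
```

Without the flip, every brush stroke would land mirrored vertically on the texture.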

Inside the draw function, comment out all the lines and put the following instead. We are done!

Dragging on geometry by unprojecting mouse coordinates

You can manipulate your 3D object in 3D space without any need for complex vector math calculations.

MouseEvent3D enables you to extract the global 3D coordinates of a mesh based on the mouse position. The only job left for you is to assign those coordinates to any object in the scene, and you have a 3D mouse-based interactive transformation. In the following example, you will learn how to move one object (a small plane) with the mouse across the surface of another, slightly elevated plane. The marker plane is moved in 3D space, receiving its position from mouse screen coordinates that are transformed (unprojected) into 3D space. After we initialize the scene geometry in the initGeometry method, we add the MouseEvent3D listener.

These lines inside the onEnterFrame method take care of it. Piece of cake, isn't it? See also the Creating advanced spherical surface transitions with Quaternions recipe in this chapter, and Transforming objects in 3D space, relative to the camera position.

Morphing mesh interactively

Have you ever dreamt of being a sculptor? Well, if you have, you now have an opportunity to make your dream come true.

If not, it can still be useful to know how to deform (morph) the mesh of your 3D object with the mouse, should you plan to develop the first Flash real-time 3D modeling application.

This recipe will teach you how to perform basic mesh morphing by pushing object faces interactively. It is not going to be as powerful as Pixologic's ZBrush, but it is all up to you as a developer to make maximum use of it. In the following program, we set up a plane primitive which we will deform by pushing or pulling its faces according to the mouse location. Adding more polygons to the mesh will give much smoother results, unfortunately at the cost of performance. After setting up our plane in the initGeometry function, we add four listeners in order to register mouse events: when the user presses, moves, releases, and releases outside the geometry. The actual face extrusion happens inside the onMouse3DMove method, where we work with the face's vertices.
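The core of the push/pull step can be sketched as follows. This is an assumed structure in TypeScript, not Away3D's Face API: we move each of a face's three vertices along a direction by a brush strength.

```typescript
// Morph sketch: displace the three vertices of one face along a direction.
// Because vertices are shared between faces, moving them also deforms the
// neighbouring faces, keeping the mesh connected.
type Vec3 = { x: number; y: number; z: number };

function pushFace(
  vertices: Vec3[],
  faceIndices: [number, number, number], // indices of the face's vertices
  dir: Vec3,                             // push direction (e.g. the face normal)
  strength: number                       // positive pushes out, negative pulls in
): void {
  for (const i of faceIndices) {
    vertices[i].x += dir.x * strength;
    vertices[i].y += dir.y * strength;
    vertices[i].z += dir.z * strength;
  }
}
```

Calling this repeatedly while the mouse moves over the plane produces the sculpting effect; a negative strength carves instead of extrudes.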

Now you can start sculpting your own Michelangelo's David.

Creating a controllable non-physical car

An interactively controllable car is a great feature that can be found today in many online Flash applications, such as casual games and featured sites (the most prominent may be helloenjoy). What can have more impact on the user experience than a cool Ferrari sports car racing right on your website's front page, with the player in complete control of its movement!

Setting up an interactive car in Away3D is pretty simple. The only time-consuming part that can potentially complicate the process is issues with the car's 3D model. The most common of these is a mismatch between the local coordinates of the car's parts and the Away3D coordinate system. There is no universal rule for avoiding these problems; it depends on which 3D modeling software you use, as well as on the modeling routine itself.

In the following example, we are going to set up a controllable car, using a model downloaded from the royalty-free stock at www.

You will see some of those model-related issues here, and how to fix them within Away3D. Make sure you include the following assets in your project. Use the WASD keyboard keys to drive the car around the scene, similar to the one shown in the following image. We also assign materials explicitly to each mesh object of the car, because each separate part doesn't receive one automatically. We assign the same baked map to all the parts; because each of them carries uv-mapping data from the 3D software it was modeled in, each wheel, as well as the car body, detects which part of the map is relevant to it. That step is unique to this specific model setup, as there were flipped-normal issues inherited from the modeling program.

In the onKeyDown and onKeyUp event handlers, we change Boolean flags indicating the car's moving and steering directions, which are used to define the car's movement and rotation modes inside the updateCar method. Then we add steer values for right and left steering of the front wheels. For this specific model, the default steer angle at which the wheels face forward is 90 degrees. The last step is to update the actual car movement and steering based on the values from the updateCar method. All of that is accomplished in the onEnterFrame function.

We use the yaw method, and not roll, for each wheel mesh because of local coordinate issues in this particular case. Performing the rotation directly on the mesh causes local position and rotation issues for the wheels. Keep in mind that this entire setup is specific to this particular model; some of the settings may vary slightly when using a different car model. The last step in the onEnterFrame function is to move and rotate the whole car.

We do this in the following lines of code: we find the delta rotation between the default front-wheel position and the current steering angle, then multiply it by the absolute value of the current car speed (because when the car moves backward its speed is negative, which is meaningless for steering) and divide it by a factor to get a smaller rotation step.
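The steering update described above can be sketched in TypeScript. The constant names and the divisor value are assumptions for illustration (the book elides the exact factor); the 90-degree forward angle comes from this specific model.

```typescript
// Car-heading sketch: heading changes by the steering deviation from
// straight ahead, scaled by absolute speed and divided down to a small
// per-frame rotation step.
const WHEEL_FORWARD = 90;   // this model's "wheels facing forward" steer angle
const STEER_DIVISOR = 1000; // assumed factor to shrink the per-frame step

let carRotation = 0;        // the car's current heading, in degrees

function updateCarRotation(steerAngle: number, speed: number): void {
  const delta = steerAngle - WHEEL_FORWARD;        // deviation from straight
  // abs(speed): a reversing car has negative speed, which is meaningless
  // for how strongly the steering turns the chassis.
  carRotation += (delta * Math.abs(speed)) / STEER_DIVISOR;
}
```

With the wheels straight (steerAngle = 90) the delta is zero and the car holds its heading, no matter the speed.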

In the last line, we call the updateCar method to update the speed and steer values. After this, creating your own car-racing game should be a breeze!

Creating a physical car with JigLib

Talking about Flash content, it is hardly a secret that the highest-rated work created with it is almost always the result of some cool effects, which leave users with their jaws dropped for a few seconds.

Today, when Rich Internet Applications (RIA) rule the virtual world, it is hard to knock someone off their feet with neat design or screen-wide animations alone. Special effects are an essential part of Flash applications, especially where 3D is concerned.

A third dimension adds value to the effect and to the impact you make on the user. One may suggest that 3D is, by itself, an effect when we talk about the Flash environment. That was true a couple of years ago, when it was tricky to make something look three-dimensional inside your application.

But today, when 3D content is something people are used to, your challenge as a developer is to find new ways to surprise them, and that is where special effects come to your rescue.

Experiencing the Wonders of Special Effects

In this chapter, we will walk through recipes which will not necessarily turn your work into a masterpiece such as "The Matrix", but you will learn a few useful techniques and approaches for creating special effects with the Away3D library.

These techniques and approaches will save you a lot of time and point you in the right direction towards really advanced work.

Exploding geometry

It may be useful in certain scenarios to know how to blow your objects into pieces, especially when you work on a shooter game. We can blow up geometry in Away3D relatively easily using the utility class Explode, which is located in the away3d package.

So let's go through a quick demolition course.

Getting ready

As usual, set up a basic scene using the AwayTemplate class and you're ready to go. In the following program, we create a regular sphere primitive.

Then we trigger its explosion into pieces, which are the mesh's triangles. What we need to do is detach all the triangles of the sphere's faces from each other so that we can manipulate them freely in space.

To do this, we use the Explode class, which extracts each triangle into a separate mesh. When you click the sphere, the explodeAll function is triggered. We first remove the scene lights for performance reasons, because it is memory-intensive to shade multiple flying triangles at runtime. Try this out; maybe you will find it useful in certain scenarios.

The following lines are responsible for this. Another tween is responsible for a random rotation of each triangle mesh while it travels away. Here, our goal is basically the same as in Exploding geometry (simple way).

But here we will take it to the next level: instead of just dividing the whole mesh into triangles, we will separate it into random chunks consisting of several triangles, or faces, grouped together. This way, the explosion debris will look more realistic.

Getting ready

Set up a basic scene using the AwayTemplate class and you are ready to go. Use the following code for RandomChunksExplode.

Unlike the simple explosion program, here the whole process is divided into two stages. First, we isolate each triangle of the sphere primitive we want to explode into a unique mesh. We then run through the triangles, attaching them randomly into small mesh groups while still keeping the overall spherical formation.

Let's run through the code to see how it is done. All the fun begins when we click the sphere and the explodeAll method is triggered. After this, we remove the original sphere, as we don't need it anymore. The function accepts these parameters, in order: an ObjectContainer3D with the triangle meshes, an empty target ObjectContainer3D for the generated chunks, and a breakFactor number which defines the upper limit for the random number of triangles in a single chunk.

Inside the function, we use a recursive approach. After the loop finishes, the next step is to apply the initial material back to each chunk; otherwise, you will see random color mapping on each face. If triangles remain, the generateRandomExpl function calls itself; otherwise the statement is exited with a return operator and the function completes executing. Here we run a for-each loop in which we iterate through each chunk mesh and tween it with TweenMax in a random direction, thus simulating the explosion velocity of the chunks.
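The recursive chunking idea can be sketched as follows. This is a simplified TypeScript sketch with assumed names, operating on triangle indices rather than Away3D meshes: peel off a random-sized group of at most breakFactor triangles, then recurse on the remainder.

```typescript
// Recursively split a pool of triangle indices into random-sized chunks.
// Each chunk would become one explosion-debris mesh.
function generateRandomChunks(
  triangles: number[],   // remaining triangle indices (consumed as we go)
  chunks: number[][],    // output: one array of indices per chunk
  breakFactor: number    // upper limit for triangles in a single chunk
): void {
  if (triangles.length === 0) return;            // recursion exit
  const size = 1 + Math.floor(Math.random() * breakFactor);
  chunks.push(triangles.splice(0, size));        // peel one random-sized chunk
  generateRandomChunks(triangles, chunks, breakFactor); // recurse on the rest
}
```

Every triangle ends up in exactly one chunk, so no geometry is lost when the debris flies apart.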

Additionally, we rotate the chunks during their translation using the same approach.

Creating advanced bitmap effects using filters

Bitmap processing based on Flash's generic filters, in combination with BitmapData input, is widely used to create quite stunning visual effects in 2D as well as in 3D. Unfortunately, at the time of this writing, Away3D has no built-in functionality for bitmap effects like its rival PV3D has (BitmapEffectLayer); nevertheless, we can easily overcome this by writing those effects from scratch ourselves.

This is unlike the regular sphere primitive, where the triangle faces that make up the sphere are smaller towards the top and bottom.

The following image shows a geodesic sphere primitive. Notice how the triangles that make up the regular sphere are much smaller towards the bottom than they are around the middle. The geodesic sphere will produce a more rounded shape than the standard sphere using the same number of triangle faces.

Because of the way in which the UV coordinates are assigned to the geodesic sphere, it is not useful for displaying bitmap materials. See the following section on the sphere primitive for more information. The initGeodesicSphere function is used to create and display an instance of the GeodesicSphere class. One init object parameter defines the level of triangulation, with higher numbers producing smoother, more detailed spheres.

Grid plane

The grid plane is a grid of rectangles, and it is a handy primitive for judging the position of other 3D objects in the scene. Combined with the trident primitive (covered in the following section) to show the scene axes, it makes it very easy to replicate the look of a simple 3D modeling application. As you can see in the following screenshot, grid planes allow you to instantly see a 3D object's position relative to the origin (or any other point in space), which can be invaluable when debugging.

The grid plane is constructed using segments. This allows it to display rectangles rather than triangles, which is how the plane primitive (constructed using triangle faces) appears when a wireframe material is applied to it. The initGridPlane function is used to create and display an instance of the GridPlane class. This property defaults to the value assigned to the segments property.

LineSegment

The LineSegment class is another example of a primitive that is built using segments rather than triangle faces.

It can be used to display a line between two points in space. For convenience, you would probably use the LineSegment class rather than building a mesh and then manually adding a segment to it, as we did in the section The basic elements of a 3D object.

The initLineSegment function is used to create and display an instance of the LineSegment class. An edge value of 100 will create a line segment that starts at (-50, 0, 0) and ends at (50, 0, 0). If specified, this parameter will override the default end point defined by the edge parameter. Another parameter sets the start point of the line segment.


If specified, this parameter will override the default start point defined by the edge parameter.

Plane

The plane is a flat, rectangular shape that, by default, is only visible from one side.

When it is viewed from behind, the back-face culling process (which is used to improve the performance of an Away3D application by not rendering the back side of a triangle face) will prevent the primitive from being drawn. Setting the bothsides init object parameter to true will override this behavior and ensure that the plane is visible from behind as well as from the front.


The initPlane function is used to create and display an instance of the Plane class. This property defaults to the value assigned to the segments property.

RegularPolygon

The Triangle class creates a three-sided primitive, and the Plane class creates one with four sides.

The RegularPolygon class is a little more flexible, and can be used to create regular shapes with any number of sides, as long as there are at least three.

Just like the Plane class, an instance of the RegularPolygon class will not be visible from the back unless the bothsides init object parameter is set to true. The initRegularPolygon function is used to create and display an instance of the RegularPolygon class.

Larger numbers increase the triangle count of the polygon. Another parameter defines the radius of the polygon.

RoundedCube

The RoundedCube class produces a cube that has rounded edges.

It uses significantly more triangles than the Cube class, so care should be taken to use the RoundedCube class only in situations where the rounded edges can be seen. A rounded cube and a regular cube off in the distance will look much the same, but the additional triangles used to construct the rounded cube will still take additional processing power. One init object parameter defines the width of the cube.

SeaTurtle

The sea turtle can't really be considered a primitive shape, but it can be created and used just like any other primitive.

The sea turtle features heavily in demos created by Rob Bateman, one of the core Away3D developers, whose personal website is http: Chapter 6, Models and Animations, will cover the model formats supported by Away3D, and how they can be exported into ActionScript classes, in more detail. The texture used in the following screenshot can be found in the example download package on the Away3D website.

It has the file name seaturtle. The initSeaTurtle function is used to create and display an instance of the SeaTurtle class. The scale init object parameter, used here to uniformly scale down the size of the 3D object, is interpreted by the Object3D class, which is inherited by the SeaTurtle class.

Chapter 3, Moving Objects, covers scaling in more detail.

Skybox

The skybox is created with very large dimensions compared with the default dimensions of the Cube class.

A skybox is designed to enclose the entire scene, including the camera, and usually has a material applied to it that displays a panoramic view displaying the world beyond the scene. The following image shows two shots of the skybox from the outside, looking through the back of the cube. Usually the camera and all the other 3D objects in the scene will be enclosed by the skybox, but from the outside you can get a sense of how the six sides of the cube can be used to enclose the scene.

The skybox on the left has had some simple numbered bitmap materials applied to each of its sides. This makes it easy to see how the materials passed into the Skybox constructor map to the final result. The skybox on the right has had some specially formatted skybox textures applied to it. This is how a skybox would look in an actual Away3D application. The initSkybox function is used to create and display an instance of the Skybox class. Rather than an init object, the constructor takes six parameters, each one defining a material to be displayed on one of the six sides of the cube.

These parameters are listed in the following table.

Skybox6

The Skybox6 class is used to create a skybox, just like the Skybox class. The only difference is that it takes one material divided into two rows and three columns (much like the Cube class when the mappingType parameter is set to map6), with each of the six segments then being applied to one of the six sides of the cube. The following figure is a sample of a texture that can be used with the Skybox6 class. The initSkybox6 function is used to create and display an instance of the Skybox6 class.

Instead of an init object, it takes a single parameter defining the material to be displayed on the cube.

Sphere

The Sphere class is the second class that can be used to create a spherical 3D object. The initSphere function is used to create and display an instance of the Sphere class. One init object parameter defines the radius of the sphere. It has already been noted that the GeodesicSphere class produces a more uniform sphere than the Sphere class. So why would you use the Sphere class? The answer becomes apparent when you apply a bitmap material to 3D objects created using both classes.

The following is a screenshot of a 3D object created by the Sphere class. As you can see, the material is neatly applied across the surface of the sphere. It's clear that while the GeodesicSphere class may produce a higher-quality mesh, its UV coordinates are a bit of a mess. The Sphere class, on the other hand, will apply a material in a much more consistent and usable fashion. However, this is only an issue when using bitmap materials.

Torus

The Torus class creates a doughnut-shaped 3D object.

One parameter defines the tube thickness, and cannot be larger than the radius parameter, which defines the overall radius of the torus.

Triangle

The Triangle class is built from a single triangle face.

Like the Plane and RegularPolygon classes, an instance of the Triangle class will not be visible from the rear unless the bothsides init object parameter is set to true. Another parameter sets the size of the triangle.

Trident

The Trident class creates three colored arrows that represent the X, Y, and Z axes. If the showLetters parameter is set to true, each of these axes will also be labeled. It is very useful for debugging, as it can be used to show the orientation of a 3D object.

Chapter 3, Moving Objects, explains how the orientation of a 3D object can affect the results of certain functions. The following two parameters in the table are passed directly to the constructor as regular parameters. The first is the length of the trident axes.

Summary

All 3D objects are constructed using a number of basic elements: vertices, segments, and triangle faces. These elements are then combined and added to a Mesh object to create more complex shapes, and we saw some example code that demonstrated how this can be done manually. Texture maps are applied to the surface of a 3D object using UV coordinates, which define how a texture map is displayed by a triangle face.

Away3D includes a number of classes that allow primitive shapes like cubes, cones, spheres, and planes to be easily created without having to manually construct them from their basic elements.

A sample application was presented that demonstrated how these primitive 3D objects can be created and added to the scene. The differences between similar primitives, like the sphere and geodesic sphere, were highlighted. We also touched on some additional topics that will be covered in more detail in later chapters.

The cube and skybox classes have some unique ways of applying materials, and the sphere classes showed some significant differences in the way they apply materials. For these classes, we used the BitmapFileMaterial class, which will be covered in more detail in Chapter 5, Materials. The sample application also made use of the rotationX, rotationY, and rotationZ properties from the Object3D class to modify the orientation of the primitive 3D objects. These properties are explored in the next chapter, in which we will learn how to move, rotate, and scale 3D objects within the scene.

We touched on this topic in Chapter 2, Creating and Displaying Primitives, where the primitive 3D objects created by the PrimitivesDemo application were rotated by modifying their rotationX, rotationY, and rotationZ properties.

This chapter will explore in greater detail how 3D objects can be transformed within a scene. In Away3D, coordinates can be described from three points of reference: world space, parent space, and local space. Understanding the difference between them is important, because all movement, rotation, and scaling operations in Away3D work relative to one of these three coordinate systems.

World space

The global coordinate system represents points or vectors relative to the origin of the scene. This coordinate system can also be referred to as world space. The following is the code that was used to create the sphere. The position assigned there is relative to the position of the 3D object's parent container.

In this case, the sphere has been added as a child of the scene. This means that the position of the sphere in world space is the same as the position it was assigned. The coordinate system relative to a parent container is also known as parent space. It was noted in the previous section that the position assigned to a 3D object via the x, y, and z properties is relative to the position of its parent container.

When the parent of a 3D object is the scene (as has been the case with the examples presented to this point), parent space and world space are the same. However, the scene is not the only container that can hold 3D objects as children. So, just as we have used the Scene3D object as a container for our 3D objects, so too can we use the ObjectContainer3D object from the away3d.containers package. The container itself has no visual representation, so even though we have added it to the scene, the container will not be visible.

This will position the sphere at the origin of its parent. Since the parent of the container is the scene, the position of the container in world space is the same as its position in parent space. The position of the sphere in parent space is 0, 0, 0.
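The relationship between parent space and world space can be sketched with simple vector addition (a plain JavaScript sketch, not Away3D code; the container position of (0, 0, 500) is an illustrative value, and rotation and scale are ignored for simplicity):

```javascript
// World position of a child = world position of its parent container
// plus the child's position in parent space (ignoring rotation/scale).
function toWorld(parentWorld, localPos) {
  return {
    x: parentWorld.x + localPos.x,
    y: parentWorld.y + localPos.y,
    z: parentWorld.z + localPos.z,
  };
}

// A container placed at an illustrative (0, 0, 500) in the scene,
// holding a sphere at its origin: the sphere's world position is
// the same as the container's world position.
const container = { x: 0, y: 0, z: 500 };
const sphere = { x: 0, y: 0, z: 0 };
console.log(toWorld(container, sphere)); // { x: 0, y: 0, z: 500 }
```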

Given that the sphere sits at the origin of its parent container, the global position of the sphere is the same as the global position of the container.

Local space

The local coordinate system describes positions relative to an individual 3D object, and is also known as local space. A number of functions to move, rotate, and scale a 3D object operate in local space. To demonstrate this, let's create a new example called LocalAxisMovement. We will rotate the sphere around the Y axis by negative 90 degrees.

This has the effect of modifying the orientation of the sphere's local axes. Note the movement of the sphere along its local Z axis (indicated by the red arrow); the positive end of the Z axis is generally considered to be the forward direction in Away3D.

Away3D includes a number of additional properties and functions that can also be used to move a 3D object within the scene.


The x, y, and z properties

The initial position of a 3D object can quite often be specified with the x, y, and z properties of the init object. This method has already been used in previous examples presented in the book. It sets the position properties once, when the 3D object is created, rather than letting them be set to a default value and then modifying them later, making this the most efficient, and therefore preferred, way to define the initial position of a 3D object.

This is true for the x, y, and z properties, and for the position property. It is possible to find the position of a 3D object within the scene (the world position) by accessing the scenePosition property; however, this is a read-only property.

The position property

The position property can be set with a Vector3D object, which specifies the position of the 3D object relative to its parent along the X, Y, and Z axes all at once. The following image shows how these directions relate to the local X, Y, and Z axes.

The moveTo function

The moveTo function works in much the same way as the position property, in that it can be used to set the position of a 3D object relative to its parent along all three axes at once.

The call to the moveTo function in the following example achieves the same result as assigning a Vector3D object with the values 20, , 10 to the position property.

The first parameter, axis, defines the direction to move. The length of the vector is not used when calculating the distance to move; it is normalized (modified so it has a length of one unit) by the translate function.

It is the second parameter, distance, which defines how far along the vector the 3D object will move. The following example would move the cube to the same position as the other move functions shown previously. We know that the axis vector will move the cube in the same direction as in the other examples, because we have constructed it using the same values for the X, Y, and Z axes. We constructed it using these values for convenience; remember that, because the vector is normalized, only the direction in which it points counts.

So we know that the axis vector only defines the direction in which to move, and not how far to move. Therefore, to move the cube to the same final position as the other move examples, we need to know the length of the axis vector. We can calculate this using Pythagoras' theorem, which states that the length of a vector is the square root of the sum of the squared lengths of its component axes. From this, we can calculate the length using the Math.sqrt function.
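The normalization performed by translate and the Pythagorean length calculation can be sketched as follows (plain JavaScript, not Away3D code; the vector (20, -30, 10) is an illustrative value):

```javascript
// Length of a vector via Pythagoras' theorem.
function length(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Normalization: scale a vector so its length is exactly one unit.
function normalize(v) {
  const len = length(v);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// translate(axis, distance): only the direction of axis matters,
// because the axis is normalized before the distance is applied.
function translate(position, axis, distance) {
  const dir = normalize(axis);
  return {
    x: position.x + dir.x * distance,
    y: position.y + dir.y * distance,
    z: position.z + dir.z * distance,
  };
}

// Moving a distance equal to length(axis) along axis reproduces a
// plain offset by the axis vector itself.
const axis = { x: 20, y: -30, z: 10 }; // illustrative values
const moved = translate({ x: 0, y: 0, z: 0 }, axis, length(axis));
console.log(moved); // ≈ { x: 20, y: -30, z: 10 }
```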

The rotation init object parameters

The initial rotation of a 3D object can quite often be specified using the rotationX, rotationY, and rotationZ init object parameters.

These values represent the rotation of a 3D object around the X, Y, and Z parent space axes, and are measured in degrees.

Flash functions like Math.sin and Math.cos work in radians, so be mindful of how angles are measured when using different functions. You can convert radians to degrees using the formula degrees = radians × 180 / π. The following example code achieves the same end result as setting the rotationX, rotationY, and rotationZ properties individually, as described previously.
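The conversion between the two units can be sketched as:

```javascript
// Away3D rotation properties are measured in degrees, while the
// trigonometric Math functions work in radians.
function degreesToRadians(deg) {
  return deg * Math.PI / 180;
}

function radiansToDegrees(rad) {
  return rad * 180 / Math.PI;
}

console.log(radiansToDegrees(Math.PI)); // ≈ 180
console.log(degreesToRadians(90));      // ≈ 1.5708
```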

By combining rotations around these fixed axes, you can rotate a 3D object into any desired orientation. However, in some situations it is easier and quicker to rotate a 3D object around a single arbitrary axis, and the rotate function can be used to do just this. The following code will rotate the sphere 90 degrees around the local space vector 1, 0, 1. The lookAt function, described next, accepts an optional second parameter called upAxis.

This vector, in parent space, is used in conjunction with the local Z axis of the 3D object once it has been rotated to face the target position to define a plane.

The 3D object is then oriented so its local Y axis lies on this plane, while also being at right angles to the local Z axis. The following code will rotate the camera so that it is looking at a position 10, 20, 30 units from the origin of the camera's parent container. The cube in the following image, seen from above, has been rotated 45 degrees around the Y axis.

The position of the pivot point can be defined by assigning a Vector3D object to the pivotPoint property, and is defined in local space. Here we have set the pivot point to the right of the cube.

The movePivot function

The movePivot function can also be used to set the position of the pivot point. The difference between the pivotPoint property and the movePivot function is that you do not have to create an intermediary Vector3D object when using the movePivot function.
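The effect of an offset pivot point on rotation can be sketched mathematically (plain JavaScript; a Y-axis rotation in the XZ plane, with a hypothetical pivot 100 units along the X axis; the sign convention here is a generic one, not necessarily Away3D's):

```javascript
// Rotating a point around the Y axis about an offset pivot:
// translate so the pivot sits at the origin, rotate, translate back.
function rotateYAboutPivot(point, pivot, degrees) {
  const rad = degrees * Math.PI / 180;
  const dx = point.x - pivot.x;
  const dz = point.z - pivot.z;
  return {
    x: pivot.x + dx * Math.cos(rad) + dz * Math.sin(rad),
    y: point.y,
    z: pivot.z - dx * Math.sin(rad) + dz * Math.cos(rad),
  };
}

// A point at the origin, rotated 90 degrees around a pivot 100 units
// to its right, sweeps an arc and ends up near (100, 0, 100).
console.log(rotateYAboutPivot({ x: 0, y: 0, z: 0 }, { x: 100, y: 0, z: 0 }, 90));
```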

The following code has the same effect as assigning the same position to the pivotPoint property. The scenePivotPoint property is read-only, so you cannot assign a new position for the pivot point through it. These concepts are easy to visualize if you apply them to a camera. Modifying the pitch will make the camera look up and down. Modifying the yaw will make the camera look left and right. Modifying the roll would be like turning the camera upside down.

A 3D object can be scaled uniformly along all three local axes, or along the local X, Y, or Z axis individually.

The scale init object parameter

The initial scale of a 3D object along all of its local axes can quite often be specified with the scale init object parameter.

This scales the 3D object uniformly along all three axes. In this example, we have scaled the sphere to twice its default size. You can also supply negative numbers to these scale functions and properties, which has the effect of turning a 3D object "inside out". This achieves the same result as the scale init object parameter, and can be used on those 3D objects that do not implement init objects. Here we have used these three properties to scale a 3D object by a factor of 2 along the X axis, by a factor of 3 along the Y axis, and by a factor of 4 along the Z axis.

Even though we only work in three dimensions, a 4x4 matrix is required to contain the information used by all the transformations supported by Away3D, including scaling, rotations, and translations, in a single matrix.

Matrices are represented by the Matrix3D class in the flash.geom package. It is possible to manipulate a transform matrix directly, and then pass it to a 3D object via the transform property defined in the Object3D class.

However, it is generally more convenient to use the listed functions to transform a 3D object rather than modifying the transform matrix directly.
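The idea of packing scale, rotation, and translation into one 4x4 matrix can be sketched as follows (plain JavaScript with a row-major layout; this is an illustration of the math, not the flash.geom.Matrix3D API):

```javascript
// Multiplying a point in homogeneous coordinates (x, y, z, 1) by a
// 4x4 matrix applies scale, rotation, and translation all at once.
function transformPoint(m, p) {
  const v = [p.x, p.y, p.z, 1];
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[row][col] * v[col];
    }
  }
  return { x: out[0], y: out[1], z: out[2] };
}

// Scale by 2 on X combined with a translation of (10, 20, 30):
const m = [
  [2, 0, 0, 10],
  [0, 1, 0, 20],
  [0, 0, 1, 30],
  [0, 0, 0, 1],
];
console.log(transformPoint(m, { x: 5, y: 0, z: 0 })); // { x: 20, y: 20, z: 30 }
```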

Tweening

In a number of the applications presented so far, the 3D objects in the scene have been transformed slightly each frame inside the onEnterFrame function. Another common method for modifying the properties of an object over time, including those properties that define the transformation of 3D objects, is called tweening.

There are a number of free libraries that can perform tweening operations, one of which is the GreenSock TweenLite library. Although it can be freely downloaded from the GreenSock website, TweenLite does have some licensing restrictions, which you can view there. To demonstrate how TweenLite can be used with a 3D object, we will create an application called TweeningDemo.

We then call the lookAt function to orient the camera so it is looking at the origin of the scene. Because we did not specify a position in the constructor, the sphere will initially be positioned at the scene's origin.

The to function is used to progressively modify the properties of an object over time. The first parameter, target, is the object that will be modified by the tweening operation. In this case that object is our sphere.

The second parameter, duration, defines how long the tweening operation will take in seconds. By supplying a value of 1 here the properties of the sphere will be progressively modified from their current values to new values that we define over a period of one second.

This is the same way that the init objects used by Away3D are created. In this object we assign the values that we want the 3D object to have once the tweening operation is complete. The properties x, z, scaleX, scaleY, scaleZ, and rotationY relate to properties that are exposed by the Sphere class.

In this example, we have assigned random values to these properties. The final property of the vars object, onComplete, is a special property that is recognized by the TweenLite class.

Any function assigned to this property will be called once the tweening operation is complete. Here, we have assigned the tweenToRandomPosition function. Since a new tweening operation is created inside the tweenToRandomPosition function, this has the effect of creating an endless sequence of tweening operations: as one completes, the tweenToRandomPosition function is called and a new one is started. The scale of the sphere is modified to fall within a random range.
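The mechanics of a tween can be sketched as follows (plain JavaScript with linear easing; makeTween and its names are hypothetical helpers, not the TweenLite API, which also supports many easing functions and callbacks such as onComplete):

```javascript
// A tween snapshots the starting values of the chosen properties,
// then interpolates them toward target values over a duration.
function makeTween(target, duration, vars) {
  const start = {};
  for (const key of Object.keys(vars)) start[key] = target[key];
  return function update(elapsedSeconds) {
    const t = Math.min(elapsedSeconds / duration, 1); // progress 0..1
    for (const key of Object.keys(vars)) {
      target[key] = start[key] + (vars[key] - start[key]) * t;
    }
    return t === 1; // true once the tween is complete
  };
}

const sphere = { x: 0, rotationY: 0 };
const tween = makeTween(sphere, 1, { x: 200, rotationY: 90 });
tween(0.5);
console.log(sphere); // { x: 100, rotationY: 45 }
tween(1);
console.log(sphere); // { x: 200, rotationY: 90 }
```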

One of the best things about tweening operations is that they tend to be "fire and forget". You don't have to keep track of how much time has passed and manually modify the properties every frame, as has been done in previous applications in the onEnterFrame function.

In fact, you will notice that we have not used the onEnterFrame function at all in this example. TweenLite includes a lot more functionality than has been covered by this book, and indeed the TweenMax library, also from GreenSock, includes more functionality still.

To explore these additional features, you can use the interactive demos found on the GreenSock website. Just remember that tweening libraries generally have no inherent concept of 2D or 3D; they just modify the values of given properties over time. The only difference between tweening a 2D object and a 3D object is the modification of properties relating to the third dimension, along the Z axis.

Nesting

As we saw with the parent coordinate system, it is possible to add 3D objects to a parent container other than the scene. Adding 3D objects to parent containers in this way is referred to as nesting. Nesting is used to transform a group of 3D objects simultaneously: a parent container can be moved, scaled, or rotated, which in turn will transform its children 3D objects. Imagine you were creating a shoot 'em up style of game, where each space ship can be matched with a variety of guns, with each gun represented by a distinct 3D object.

While this could be achieved by providing a separate model for each combination of ship and gun, such an approach would quickly become unworkable as the number of combinations increased. If you had five ships, and each ship could be matched with six guns, you would need to supply 30 individual models. A better solution is to model each of the ships and guns separately, and combine them at runtime to form the necessary combinations.

Here is a screenshot of the gun:

Here is a screenshot of the ship:

The following NestingDemo class demonstrates how the ship and gun 3D objects shown in the screenshots can be added to a container so they can be transformed as a single group.

The Cast class provides a convenient way to cast objects between types. In this example, it is used in conjunction with the BitmapMaterial class, which will be used to apply a material to the 3D objects that we will be adding to the scene. The container property will reference the parent container that the ship and gun models will be added to. The material will then be applied to the ship and gun 3D objects.

It will be referenced by the fighter variable, and is used to represent the space ship. We create a new instance of the Gun class, and apply the material to it. This means the position of the 3D object cannot be set using an init object, so the position is instead set via the x, y, and z properties after the object has been created. This second gun is positioned on the opposite side of the X axis.

We want to be able to work with these three 3D objects as if they were a single item. This is where nesting is useful. First we create a new container, and add the separate 3D objects as children by supplying them as the first three parameters of the ObjectContainer3D constructor. In fact, we have already moved the 3D objects as a group by specifying the initial position of the container when it was constructed.

The onEnterFrame function also transforms the 3D objects as a group by modifying the rotation of the container.

Summary

The transformation of a 3D object takes place within three distinct coordinate systems, or spaces.

Coordinates in world space are defined relative to the scene, coordinates in parent space are defined relative to a 3D object's parent container, while coordinates in local space are defined relative to an individual 3D object. Away3D includes over a dozen functions and properties that can be used to modify the position, rotation, and scale of 3D objects. Each of these functions transforms a 3D object within one of the three coordinate systems. Tweening can be used to modify the properties of an object over time, and we saw an example of how the TweenLite library can be used to transform a 3D object without having to manually transform it every frame using the onEnterFrame function.

Finally, we saw how nesting can be used to transform a group of 3D objects simultaneously by placing them in a container, such as an instance of the ObjectContainer3D class, and then transforming the container.

Away3D uses what is known as the painter's algorithm to draw the elements that make up the scene to the screen, and it is very easy to take this process for granted, as most of the time Away3D will draw these elements in the correct order. However, there are certain situations where it is necessary to tweak the order in which Away3D sorts the 3D objects within the scene.

This chapter demonstrates one such situation, and presents the methods that are available to manually correct the sorting process. Away3D also includes some additional renderers that can be used to automatically correct the sorting order of the 3D objects within the scene.

These renderers are demonstrated, and their implications on the performance of an Away3D application are explored. The following image, from the Wikipedia article on the subject, shows the steps that are taken to paint an outdoor scene. The mountains, being furthest back in the scene, are painted first. The ground and shrubs are then painted, and finally the trees are painted over both. Z-Sorting, or depth sorting, is a technique that is used to sort the elements that make up a 3D object based on how far away they are when viewed by the camera.

This then allows these 3D object elements to be rendered to the screen using the painter's algorithm, in order from those furthest back in the scene to those that are the closest. For the most part, this algorithm works fine and there are no additional steps that need to be taken to render the 3D object elements in the correct order.
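At its core, the painter's algorithm is just a sort from farthest to nearest (a plain JavaScript sketch with illustrative depth values, mirroring the mountains, shrubs, and trees of the outdoor-scene example):

```javascript
// Painter's algorithm: sort elements from farthest to nearest, then
// draw them in that order so closer elements paint over farther ones.
function paintOrder(elements) {
  return elements
    .slice() // avoid mutating the input array
    .sort((a, b) => b.zDepth - a.zDepth)
    .map((e) => e.name);
}

const scene = [
  { name: "trees", zDepth: 100 },
  { name: "mountains", zDepth: 900 },
  { name: "shrubs", zDepth: 400 },
];
console.log(paintOrder(scene)); // [ 'mountains', 'shrubs', 'trees' ]
```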

However, there are situations where this algorithm fails. To understand how the painter's algorithm can fail, we first need to look at how the elements that make up a 3D object are sorted within the scene.

Sorting the scene

The distance of each element within the scene is determined by a single value, known as the z depth.

The z depth value is calculated using the average position of each vertex that makes up an element along the camera's local coordinate Z-axis.

An easy way to visualize the camera's local space is to imagine that the camera is sitting at the origin, looking directly towards the positive end of the Z axis. This is illustrated in the following image, where the coordinates shown are in the camera's local space. To calculate the z depth of a triangle, we take the Z components of its three vertices and average them to give a single value. This single averaged value is then used as the z depth of the 2D representation of the triangle, even though the depths of the individual vertices may span a wide range. Sorting the 3D object elements by their z depths can lead to inconsistencies when the single averaged z depth value does not accurately represent a drawing primitive's relative position within the scene.
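The averaging described above can be sketched as follows (plain JavaScript; the depth values are illustrative, not the book's exact figures):

```javascript
// The z depth of a triangle is the average of its vertices' Z values
// in the camera's local space. A single average can misrepresent a
// triangle whose vertices span a large range of depths.
function zDepth(vertices) {
  const sum = vertices.reduce((acc, v) => acc + v.z, 0);
  return sum / vertices.length;
}

// Vertices at illustrative depths 90, 150, and 210 average to 150,
// even though parts of the triangle lie much nearer and farther.
console.log(zDepth([{ z: 90 }, { z: 150 }, { z: 210 }])); // 150
```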

To demonstrate a situation where the 3D object elements are not sorted correctly, let's create a new example called ZSorting. In the initScene function we will create two triangles, angled so that one appears to overlap the other from the camera's viewpoint. Triangle B has a slightly smaller z depth than Triangle A, and the triangles do not intersect. From the top-down view of the scene, it is clear that the triangle on the right Triangle B should appear behind the one on the left Triangle A.

This results in Triangle B being drawn last, over the top of Triangle A. Here is a perfect example of where a single average z depth does not accurately reflect the actual depth of the 3D objects in the scene.

Adjusting the sorting order

Away3D includes a number of methods that can be employed to adjust the order in which the drawing primitives are rendered.

In the example provided, the rendering order can be fixed by either bringing Triangle A to the front of the scene, or by forcing Triangle B to the back. The ZSortingExtended application available from the Packt website provides an example that implements the following procedures for correcting the sorting order of 3D objects in a single demo.

The pushfront and pushback properties

The pushfront property forces a drawing primitive to be sorted based on the point that is closest to the camera. For Triangle A, the closest point to the camera is Point 1. Because Point 1 is closer to the camera than the z depth of Triangle B, setting pushfront to true for Triangle A will bring it to the front of the scene, meaning it will be rendered last.

The pushback property works in the opposite manner, forcing a drawing primitive to be sorted based on the point that is furthest from the camera. For Triangle B, the furthest point from the camera is Point 2. Because Point 2 is further away than the z depth of Triangle A, setting pushback to true for Triangle B will push it to the back of the scene, meaning it will be rendered first.

The screenZOffset property

Away3D will add the screenZOffset value to the z depth, which allows you to adjust the relative depth of a 3D object within the scene. A positive screenZOffset increases the z depth, forcing a 3D object to be considered to be more towards the back of the scene.

A negative value will decrease the z depth, forcing a 3D object to be considered to be closer to the front of the scene. Note that setting the screenZOffset value will not change the position of the 3D object within the scene, only the order in which it is drawn. In the first of the following examples, a negative offset gives Triangle A a smaller z depth, placing it in front of Triangle B. In the second, a positive offset gives Triangle B a larger z depth, placing it behind Triangle A.
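The sorting adjustments covered in this section can be sketched together as follows (plain JavaScript; sortingDepth is a hypothetical helper, not an Away3D function, and the depth values are illustrative):

```javascript
// Effective sorting depth of an element: pushfront sorts by the point
// nearest the camera, pushback by the farthest, otherwise the average.
// Any screenZOffset is then added on top.
function sortingDepth(vertices, opts = {}) {
  const zs = vertices.map((v) => v.z);
  let depth;
  if (opts.pushfront) depth = Math.min(...zs);      // nearest point
  else if (opts.pushback) depth = Math.max(...zs);  // farthest point
  else depth = zs.reduce((a, b) => a + b, 0) / zs.length; // average
  return depth + (opts.screenZOffset || 0);
}

const tri = [{ z: 90 }, { z: 150 }, { z: 210 }]; // illustrative depths
console.log(sortingDepth(tri));                      // 150 (average)
console.log(sortingDepth(tri, { pushfront: true })); // 90 (nearest)
console.log(sortingDepth(tri, { pushback: true }));  // 210 (farthest)
```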

The screenZOffset will be applied regardless of whether the ownCanvas property is true or false. A canvas is a layer into which 3D objects are drawn; canvases work in much the same way as layers in image-editing software packages like Photoshop.

First, we set the ownCanvas property to true. Because Triangle A is a known distance from the camera, we can specify the z depth of the canvas that will display Triangle B to be slightly greater. This will mean that Triangle A is considered to be closer to the camera. By giving the canvas onto which Triangle B is drawn a larger screenZ value than the z depth of Triangle A, we have forced the canvas to be drawn in the background.

Be careful when setting the depth of a canvas via the screenZ property, because unlike the other methods of correcting the depth of a 3D object, the screenZ property is an absolute value and does not take into account the relative position of the camera.

If the camera were at its default position, the preceding code would draw the canvas that holds Triangle B in front of Triangle A, because the z depth of Triangle A would change with the camera, while the z depth of the canvas would remain fixed at the absolute value we assigned. This means that the order of a canvas in the scene can be modified by using the pushfront, pushback, and screenZOffset properties on the 3D objects that will be drawn into it.

A note about Z-Sorting

All of the methods described above work by modifying the z depth of a 3D object relative to the other 3D objects in the scene. It is important to realize that the desired relative depth of these 3D objects will change depending on the position of the camera. Consider the same scene created by the ZSorting application, but now viewed from the opposite side.

In this situation, if we were to set the pushback property to true for Triangle B, as we did to fix the rendering order when the camera was on the left, we would in fact be introducing a z-sorting error rather than fixing it.

This is because from the camera's new position, Triangle B should be drawn in front of Triangle A, not behind it. An example of where it is necessary to adjust the value of the screenZOffset property for a 3D object, as its position relative to the camera changes, is given in Chapter 10, Creating 3D Text, with the FontExtrusionDemo application.

Additional renderers

All the applications shown in this book have made use of the default renderer. Away3D includes three different types of renderers, each returned by a static property of the Renderer class from the away3d.core.render package. Using these renderers is quite simple. First, the Renderer class is imported, along with the other classes used by the demo:

import away3d.cameras.Camera3D;
import away3d.containers.Scene3D;
import away3d.containers.View3D;
import away3d.core.render.Renderer;
import away3d.materials.WireColorMaterial;
import away3d.primitives.Triangle;

import flash.display.Sprite;
import flash.events.Event;
import flash.events.KeyboardEvent;

A KeyboardEvent handler in the RenderersDemo class then switches between the renderers at runtime by assigning one of the static Renderer properties, such as Renderer.BASIC, to the renderer property of the view. With the default BASIC renderer, shown on the left, the triangle appears to be behind the cube, despite the fact that the two 3D objects are actually intersecting. The scene created by the RenderersDemo application is trivial, involving only a single cube primitive and a single triangle primitive.

In this case, switching between the three renderers probably won't have a great deal of impact on the performance of the application. But what happens in a more complex scene? The RenderersPerformanceDemo application creates a number of spheres that bounce around inside an invisible box. Just like the RenderersDemo application, you can switch between the three renderers at runtime.

Chances are, the application went from being smooth and fluid to quite jerky.

And that's if it doesn't simply throw a script timeout error. The RenderersPerformanceDemo application demonstrates the performance limitations of the more advanced renderers.

For all but the simplest of scenes, it is generally best to try to correct the sorting order of 3D objects manually in order to maintain a reasonable frame rate.

Summary

We saw how Away3D determines the distance of a 3D object from the camera, which then defines the sorting order of the 3D objects. We also saw some of the limitations of the algorithms used to calculate these distances. While the default algorithms implemented by Away3D will correctly sort the 3D objects within a scene most of the time, these limitations can lead to situations where a scene is not rendered correctly.

One such situation was demonstrated, and a number of solutions were then provided that allow us to control the way in which Away3D sorts the scene, including the pushfront, pushback, screenZOffset, ownCanvas, and screenZ properties. In the next chapter, we will explore the various materials that can be applied to 3D objects.

The default material applied to our 3D objects so far does allow us to view them, but it is a little boring.

Thankfully, Away3D includes over a dozen material types that can be used to display 3D objects with a huge variety of effects, with some of the materials using the Pixel Bender technology new to Flash Player 10 to create a level of detail that has not previously been seen in Flash applications. A texture is simply an image, like you would create in an image editing application like Photoshop or view in a web page.

Textures are then used by materials, which in Away3D are classes that can be applied to the surface of a 3D object. There are two ways of dealing with external files. First, ActionScript includes the Embed keyword, which can be used to embed external files directly inside a compiled SWF file.

There are a number of benefits to embedding resources: it prevents a number of possible errors due to unreliable networks and security restrictions, and produces a SWF file that is much simpler to distribute and publish. Alternatively, the external files can be saved separately and accessed at runtime. For applications where it is not possible to know what resources will be required beforehand, like a 3D image gallery, loading external resources is the only option.

You may also want to load external resources for applications where there is a large volume of data that does not need to be downloaded immediately, like a large game with levels that the player won't necessarily see in a single sitting.

Defining colors in Away3D

The appearance of a number of materials can be modified by supplying a color. A good example is the WireColorMaterial material (the same one that is applied to a 3D object when no material is specified), the fill and outline colors of which can be defined via the color and wirecolor init object parameters.

Colors can be defined in Away3D in a number of different formats. Common to all the formats is the idea that a color is made up of red, green, and blue components.

For example, the color purple is made up of red and blue, while yellow is made up of red and green.

By integer

Colors can be defined as an integer.

These int values are usually defined in their hexadecimal form. The characters that make up the int can be digits between 0 and 9, and characters between A and F.

You can think of the characters A through F as representing the numbers 10 to 15, allowing each character to represent 16 different values. For each color component, 00 is the lowest value, and FF is the highest. The first two characters define the red component of the color, the next two define the green component, and the final two define the blue component. It is sometimes necessary to define the transparency of a color. This is done by adding two additional characters to the beginning of the hexadecimal notation. In this form, the two leading characters define the transparency, or alpha, of the color.
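Unpacking the alpha, red, green, and blue components of such a 32-bit value takes a few bit shifts (a plain JavaScript sketch; ActionScript offers the same bitwise operators, and the color value here is an illustrative one):

```javascript
// Extract the alpha, red, green, and blue components of a 32-bit
// ARGB color using unsigned right shifts and a byte mask.
function argbComponents(color) {
  return {
    alpha: (color >>> 24) & 0xff,
    red: (color >>> 16) & 0xff,
    green: (color >>> 8) & 0xff,
    blue: color & 0xff,
  };
}

// 0x80ff0000 is a half-transparent red (illustrative value).
console.log(argbComponents(0x80ff0000));
// { alpha: 128, red: 255, green: 0, blue: 0 }
```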

The last six characters represent the red, green, and blue components. Smaller alpha values make a color more transparent, while higher alpha values make a color more opaque.

By string

The same hexadecimal format used by integers can also be represented as a String.

The only difference is that the prefix 0x is left off. The MaterialsDemo applyColorMaterial function demonstrates the use of this color format. Away3D also recognizes a number of colors by name, which are listed in the following table.

The programs written using Pixel Bender are known as kernels or shaders; the two terms can be used interchangeably with respect to Pixel Bender. Because shaders are not run through the ActionScript virtual machine, they have the potential to be much faster. One of the advantages of using Away3D version 3.6 is that it includes a number of materials that make use of Pixel Bender shaders.

The implementation of these shaders is largely hidden by the material classes that utilize them, meaning that they can be used much like the regular material classes, while at the same time offering a much higher level of detail. A common misconception is that Flash Player 10 uses the Graphics Processing Unit (GPU), which is common to most video chipsets these days, to execute shaders.

This is incorrect: Pixel Bender shaders in Flash Player 10 are executed on the CPU.

The effect of a light can only be seen on a material, and materials that can be illuminated will generally show up completely black without a light source.

Away3D includes three light classes, all from the away3d.lights package. Unlike the point light, the intensity of the directional light does not diminish with distance. The intensity does, however, decrease as the angle between the vector along which the directional light is shining and the surface it is shining on increases.

Ambient lights can be used to add a minimum amount of light to those materials that implement them. Only a subset of the materials available in Away3D can be illuminated, and those materials may only support a subset of the different types of lights. The following table lists those materials that can be lit, which types of lights they support, and whether the material can be illuminated by multiple light sources.

The material classes themselves will be covered in more detail later in this chapter. The phong shading materials are a good example: the choice of what type of light source to use in your Away3D applications will usually be determined by your choice of material, and not the other way around.

Shading techniques

Away3D materials use a number of shading techniques, sometimes in combination, to achieve their end result.

These techniques can be used to apply a texture to the surface of a 3D object, illuminate a 3D object using an external light source, display a reflection of the surrounding environment, or simulate the appearance of a bumpy surface. Texture mapping is used on its own to display a single texture, or in conjunction with the other shading techniques. The following image shows a sphere that uses texture mapping to display a single texture representing the Earth.

Normal mapping simulates surface detail by using the information stored in an image called a normal map to calculate how each part of the material should be shaded.

This shading gives the impression of a bumpy surface.


Normal mapping has the benefit of adding depth detail without using additional polygons. A normal mapped low-polygon 3D object will generally be rendered faster than a high-polygon 3D object with a standard material, while maintaining much of the visual quality of the high-polygon 3D object.

A useful utility for creating normal maps can be found online: a tool that will create normal maps from a grayscale displacement map, which can then be applied to flat or spherical 3D objects. The following image is an example of a normal map that can be applied to a sphere. The resulting effect is shown in the subsequent image, where you can see how the sphere appears to have a rough surface.

From the angle in the screenshot, this roughness is especially apparent over the African continent.

Environment mapping

Environment mapping displays a reflection of the surroundings on the surface of a 3D object. Reflecting the true surroundings of a 3D object on its surface is far too computationally expensive, but the effect can be approximated using a single texture, or a collection of textures that form a cube that appears to surround the 3D object.

Environment mapping is useful for creating the appearance of shiny 3D objects, like those with a polished or metallic surface. In the following image, the first two 3D objects have had a material applied that implements environment mapping reflecting a marble texture.
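A sketch of how such a material might be constructed follows. The constructor order (base texture, then environment map), the `reflectiveness` init parameter, and the hypothetical `Marble` embedded class are assumptions; only the idea of layering an environment map over a base texture comes from the text.

```actionscript
// Sketch: an environment-mapped material over a base texture.
// Parameter order, the reflectiveness init parameter, and the
// Marble embedded class are assumptions.
var material:EnviroBitmapMaterial = new EnviroBitmapMaterial(
  Cast.bitmap(EarthDiffuse),  // base texture map
  Cast.bitmap(Marble),        // environment map seen as a reflection
  {reflectiveness: 0.5}
);
torus.material = material;
```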

The torus on the left has applied an environment map over a base texture map, while the one in the middle has applied the environment map over a solid color. As a comparison, the torus on the right has had a material applied that uses only texture mapping. The effect produced by environment mapping can be difficult to appreciate in a static screenshot, but it is immediately apparent as the 3D object moves relative to the camera.

Flat shading

Flat shading illuminates each triangle face as a whole against a light source. It is very quick to calculate, but since each triangle face is shaded as a whole, it does tend to highlight the edges of a low-polygon 3D object.

The following sphere has been illuminated using flat shading. As you can see, it is easy to discern each of the triangle faces that make up the sphere.

Phong shading

Phong shading calculates the illumination of each pixel on the surface of a 3D object against a light source. This eliminates the sharp edges that can be produced by flat shading, but does so at a performance cost. The following sphere has been illuminated using phong shading.

Because each pixel is lit independently of the triangle faces, the end result is much smoother than the flat shading technique discussed previously.
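A short sketch of putting this into practice: the `scene.addLight()` call, light position, and color are assumptions in the style of other Away3D 3.x code, not taken from the original listing.

```actionscript
// Sketch: illuminating a textured sphere with phong shading.
// The addLight() call and all values are assumed.
var light:PointLight3D = new PointLight3D({x: 500, y: 500, z: -500});
scene.addLight(light);

// Each pixel of this material is lit individually, giving the
// smooth result described above.
var material:PhongBitmapMaterial =
  new PhongBitmapMaterial(Cast.bitmap(EarthDiffuse));
sphere.material = material;
```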

To accommodate this, we will apply the various materials to the sphere, torus, cube, and plane primitive 3D objects in this demo. All primitives extend the Mesh class, which makes it the logical choice for the type of the variable that will reference instances of all four primitives.

import away3d.core.base.Mesh;

The Cast class provides a number of handy functions that deal with the casting of objects between types.

import away3d.core.utils.Cast;

As we saw previously, those materials that can be illuminated support point or directional light sources, and sometimes both. To show off materials that can be illuminated, one of these types of lights will be added to the scene.

import away3d.lights.DirectionalLight3D;
import away3d.lights.PointLight3D;
import away3d.loaders.utils.TextureLoadQueue;
import away3d.loaders.utils.TextureLoader;

The various material classes demonstrated by the MaterialsDemo class are imported from the away3d.materials package.

import away3d.materials.AnimatedBitmapMaterial;
import away3d.materials.BitmapFileMaterial;
import away3d.materials.BitmapMaterial;
import away3d.materials.ColorMaterial;
import away3d.materials.DepthBitmapMaterial;
import away3d.materials.Dot3BitmapMaterial;
import away3d.materials.Dot3BitmapMaterialF10;
import away3d.materials.EnviroBitmapMaterial;
import away3d.materials.EnviroColorMaterial;
import away3d.materials.FresnelPBMaterial;
import away3d.materials.MovieMaterial;
import away3d.materials.PhongBitmapMaterial;
import away3d.materials.PhongColorMaterial;
import away3d.materials.PhongMovieMaterial;
import away3d.materials.PhongMultiPassMaterial;
import away3d.materials.PhongPBMaterial;
import away3d.materials.ShadingColorMaterial;
import away3d.materials.TransformBitmapMaterial;
import away3d.materials.WhiteShadingBitmapMaterial;
import away3d.materials.WireframeMaterial;

These materials will all be applied to a number of primitive types, which are all imported from the away3d.primitives package.

import away3d.primitives.Cube;
import away3d.primitives.Plane;
import away3d.primitives.Sphere;
import away3d.primitives.Torus;

The CubeFaces class defines a number of constants that identify each of the six sides of a cube.

import away3d.primitives.data.CubeFaces;

The following Flash classes are used when loading textures from external image files, to handle events, to display a textfield on the screen, and to define a position or vector within the scene.

import flash.display.BitmapData;
import flash.geom.Vector3D;
import flash.net.URLRequest;

Here, we see how an external JPG image file, referenced by the source parameter, has been embedded using the Embed keyword.

[Embed(source = "...")]
protected var EarthDiffuse:Class;

A number of additional images have been embedded in the same way. Here we are embedding three SWF files, which are embedded just like the preceding images.

A TextField object is used to display the name of the current material on the screen.

import flash.text.TextField;

The currentPrimitive property is used to reference the primitive to which we will apply the various materials.

protected var currentPrimitive:Mesh;

The directionalLight and pointLight properties each reference a light that is added to the scene to illuminate certain materials.

protected var directionalLight:DirectionalLight3D;
protected var pointLight:PointLight3D;

The bounce property is set to true when we want the sphere to bounce along the Z-axis.

This bouncing motion will be used to show off the effect of the DepthBitmapMaterial class.
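The bouncing motion itself might be driven from the frame loop. This is only a minimal sketch: the `onEnterFrame` handler, the `view` property, and the frequency and amplitude constants are assumptions, not the book's original listing.

```actionscript
// Sketch: driving the bounce along the Z-axis once per frame.
// The 0.1 frequency and 200 amplitude are illustrative values.
protected function onEnterFrame(event:Event):void
{
  if (bounce)
  {
    frameCount++;
    // Math.abs() keeps the primitive on one side of its rest
    // position, producing a bounce rather than an oscillation.
    currentPrimitive.z = Math.abs(Math.sin(frameCount * 0.1)) * 200;
  }
  view.render();
}
```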

protected var bounce:Boolean;

The frameCount property maintains a count of the frames that have been rendered while the bounce property is set to true.

Optionally, it can set the bounce property to true, which indicates that the primitive should bounce along the Z-axis. These functions all set the bounce property to false, as none of the materials that will be applied to these primitives gain anything by having the primitive bounce within the scene.

If you have a directional light that is not being reflected off the surface of a lit material, leaving the direction property at this default value may be the cause.

Here we override the default to make the light point back to the origin. We also set the position of the camera back to the origin.

This textfield will be used to display the name of the currently applied material. As such, the x and y coordinates shown below relate to a 2D position on the screen, and not a 3D position within the Away3D scene.

This is used to show off the effect produced by the DepthBitmapMaterial class.
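For illustration, overriding the direction and recentering the camera might look like the following. The direction vector shown is an assumption chosen so the light shines back toward the origin; the original listing may use different values.

```actionscript
// Sketch: point the directional light back toward the origin
// (the vector is an assumed value) and recenter the camera.
directionalLight.direction = new Vector3D(0, 0, -1);
camera.x = 0;
camera.y = 0;
camera.z = 0;
```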

The comment next to each case statement shows the key that the keyCode property relates to. The coming sections list the materials applied by these functions, the applying function itself, and a table showing the parameters that each material accepts. Unlike the classes used to create the primitive 3D objects in Chapter 2, Creating and Displaying Primitives, which usually accepted a single init object as the constructor parameter, the Away3D material classes have constructors that accept a combination of regular parameters and an init object.
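The key-handling switch might be sketched as follows. Only applyWireColorMaterial is named in the surrounding text; the event handler name, the second function, and the key codes are illustrative assumptions.

```actionscript
// Sketch: mapping number keys to the functions that apply each
// material. Key codes 49 and 50 are the "1" and "2" keys.
protected function onKeyUp(event:KeyboardEvent):void
{
  switch (event.keyCode)
  {
    case 49: // 1
      applyWireColorMaterial();
      break;
    case 50: // 2
      applyWireframeMaterial();
      break;
    // ...further cases for the remaining materials
  }
}
```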

To distinguish between the two, the regular parameters will be shown in bold in the following tables.

Basic materials

The basic materials don't rely on a texture, and so can be used without having to load or embed any external resources.

This makes them easy to use, and they are great for quickly prototyping an application.

WireColorMaterial

For most of the previous demos, we have not specifically applied any particular material to the 3D objects. When no material is specified, Away3D will apply the WireColorMaterial material, which shades the 3D object with a solid color (this color is randomly selected at runtime, unless a specific color is supplied) and draws the outline of the 3D object's triangle faces.

Here we will specifically create a new instance of the WireColorMaterial class and apply it to the 3D object. The color of the material has been specified by a String representing the color's name. Here we have defined the solid color to be dodgerblue (using a String to define the color), while the color of the wireframe will be white (defined by the int 0xFFFFFF), with a width of two pixels. Those parameters in bold are passed directly to the constructor, while the remaining parameters are passed in via an init object.
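Put together, the call might look like this. The `wirecolor` and `width` init parameter names are assumptions based on Away3D 3.x's wire materials, so verify them against the parameter table.

```actionscript
// Sketch: the solid color as a regular constructor parameter,
// with the wireframe color and width passed via the init object.
// The init parameter names are assumed.
var material:WireColorMaterial = new WireColorMaterial(
  "dodgerblue",
  {wirecolor: 0xFFFFFF, width: 2}
);
currentPrimitive.material = material;
```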

WireframeMaterial

The WireframeMaterial only draws the outline of the triangle faces that make up the 3D object. In this example, the wireframe color has been specified using an int. This int value is equivalent to the dodgerblue color used in the applyWireColorMaterial function.

ColorMaterial

This example shows the color being supplied as the string version of the dodgerblue hexadecimal representation. If the debug init object parameter is set to true, an instance of the ColorMaterial class will be drawn just like the WireColorMaterial class.
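For reference, dodgerblue corresponds to the hex value 0x1E90FF, so the two forms described above might be written as follows. This is a sketch; the exact values used in the book's listings are not shown here.

```actionscript
// dodgerblue as an int for WireframeMaterial, and as the string
// form of the same hexadecimal value for ColorMaterial.
var wireframe:WireframeMaterial = new WireframeMaterial(0x1E90FF);

var color:ColorMaterial = new ColorMaterial("0x1E90FF");
```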

Bitmap materials

Bitmap materials display a texture on the surface of a 3D object.

BitmapMaterial

In this example, the bitmap is created from the embedded image contained in the EarthDiffuse class.
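One plausible form of this, assuming the Cast utility class imported earlier is used to extract the BitmapData from the embedded class:

```actionscript
// Sketch: creating a BitmapMaterial from the embedded EarthDiffuse
// image via the Cast utility class.
var material:BitmapMaterial =
  new BitmapMaterial(Cast.bitmap(EarthDiffuse));
currentPrimitive.material = material;
```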

A new instance of the BitmapMaterial class could also have been created using the following code:

Precision correction triangles are drawn with blue outlines. The BlendMode class, from the flash.display package, supplies the valid values for the blend mode parameter. The bitmapData object supplied to the constructor is used as the material's texture.