Volume rendering technology and voxel-based graphics

Development of a program applying the technique of surface voxelization of 3D models using the graphics pipeline. Review of existing 3D scene voxelization methods, with a focus on surface voxelization. Analysis and results of performance tests.



Introduction


Volume rendering technology and voxel-based graphics have been in use for a long time. The technology is widely applied in different scientific fields: medicine, engineering, geology. But for a considerable amount of time it had few followers in the computer graphics sphere. Interest in voxel-based graphics appeared in the mid-eighties of the 20th century with the first scientific articles on the topic. At that time there were two competing approaches to rendering: rasterization (which involved polygons) and raytracing (which involved voxels). The main point of rasterization was converting vector information into a raster format (pixels). The main point of raytracing, on the other hand, is "tracing the path of a ray of light through the 3D model to determine the pixel color" [1]. While the latter method required lots of memory, the former involved simple transformation operations, which is why it got the support of hardware producers and became the most popular technique for producing real-time 3D computer graphics. Nevertheless, with the growth of the geometric complexity of computer scenes, interacting with such a large number of primitives has become very expensive, and the industry has again started looking for alternatives to rasterization.

Only a few years ago, interest in voxel-based graphics was raised again by Cyril Crassin, who in 2009 introduced GigaVoxels, a voxel-based rendering pipeline for efficient exploration of large and detailed scenes. This system could render several billion voxels with the help of "a new volumetric prefiltered geometry representation and an associated voxel-based approximate cone tracing" [2]. It was also deeply tied to the GPU.

One year later, in 2010, Samuli Laine and Tero Karras from NVIDIA Research presented another scientific article, which raised interest in raytraced rendering even higher. It was called "Efficient Sparse Voxel Octrees" and introduced a new compact data structure for storing and rendering voxels [3]. This article made a huge impact on the computer graphics field of computer science: from that moment, voxels could be used not only for surface rendering but for rendering a whole 3D scene.

With the growing popularity of parallel computing, many new scientific papers propose parallel voxelization techniques using the CUDA or OpenCL libraries. There are also so-called hybrid algorithms, which utilize both the GPU and the CPU to improve performance, but these algorithms are not as popular as purely parallel GPU ones. One of the most influential papers presenting a parallel approach to voxelization and raytracing, "Data-Parallel Octrees for Surface Reconstruction", was written by Kun Zhou, Minmin Gong, Xin Huang, and Baining Guo from Microsoft Research Asia and the State Key Lab of CAD&CG, Zhejiang University [6]. The main novelty of this work was a new method for real-time octree construction relying on the GPU only and exploiting its parallelism. This work gave a significant boost to the 3D reconstruction and tracking field and was referenced by many researchers, including those working on Microsoft Kinect software.

The main aim of the project described in this paper is to develop a program that voxelizes 3D models. The main objectives are the following:

· To study existing voxelization techniques;

· To choose the most suitable voxelization algorithm;

· To choose the implementation tools (programming language, frameworks, and libraries);

· To implement the program and to conduct tests;

· To analyze the results of the project.

In the next section, the most popular voxelization techniques are going to be reviewed along with their advantages and disadvantages, and the most suitable one will be chosen for the project.

1. Voxelization Methods Overview

Voxel-based rendering technology is usually divided into three branches: surface voxelization, solid voxelization, and sparse octree voxelization. All three voxelization techniques have been widely developed over the past 20 years, so, to conduct proper research on the topic, all three methods need to be reviewed.

1.1 Solid Voxelization

The first voxelization method is usually referred to as the most reliable and the oldest method of all. The main idea of solid voxelization is to mark all the voxels that lie in the interior of the model and render them. Unlike surface voxelization, it prevents holes from appearing in rendered 3D models when some of their polygons are viewed at a grazing angle. That is why solid voxelization methods are often used for simulations, visibility computations, or path-finding routines. Nevertheless, these methods consume a lot of memory for storing all the voxel data. Another serious disadvantage of solid voxelization is the restriction on the 3D model itself: it must be watertight. In other words, there must be no holes in it, no inner walls, and no enclosed inner parts.

Fig. 1 Not watertight models: with holes (A), with inner walls (B), with inner enclosed parts (C).

The most common solid voxelization method applies the definition of a watertight model to mark the voxels inside the model [4]. For each voxel in the voxel grid, it counts the number of model fragments that lie in front of its center. If the number is even, the voxel is outside the model; if it is odd, the voxel is inside. Voxel marking is usually performed with XOR operations on voxel grid bit masks.
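As an illustration, a minimal CPU-side sketch of this parity rule for a single voxel row (at most 64 cells) might look as follows; the function and variable names are hypothetical, and the actual method in [4] performs the same XOR accumulation on GPU bit masks.

#include <cstdint>
#include <vector>

// For one row of the voxel grid, 'fragmentCells' holds the cell indices where
// the model surface is hit along the view direction. Bit i of the result is
// set if voxel i ends up inside the model.
uint64_t VoxelizeRow(const std::vector<int>& fragmentCells)
{
    uint64_t result = 0; // resulting bitmask, initially all zeros
    for (int cell : fragmentCells)
    {
        // Current bitmask: all voxels lying in front of this fragment.
        uint64_t current = (cell >= 64) ? ~0ull : ((1ull << cell) - 1);
        // Accumulate with XOR: a voxel with an even number of fragments in
        // front of it cancels out (outside), an odd number marks it inside.
        result ^= current;
    }
    return result;
}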

Fig. 2 The sliced image of a 3D model representing the solid voxelization method.

The objects are voxelized along the camera view direction, row by row inside the voxel grid (Fig. 3). Each row is represented by a bitmask with two states per cell: the cell either contains an object edge or it does not.

Fig. 3 Initial solid voxelization algorithm frame.

First, the bitmask is calculated for the first edge (fragment) encountered along the view direction. Only those voxels that lie in front of the fragment are marked. The resulting bitmask initially contains only zeros, but after the first edge is found, the current bitmask and the resulting one are combined with an XOR operation (Fig. 4).

Fig. 4 “Finding the first edge” frame.

After the bitmask is filled up to the second fragment, the current bitmask is again combined with the resulting one using an XOR operation (Fig. 5).

Fig. 5 “Finding the second edge” frame.

The same technique is applied for the third fragment (Fig. 6).

Fig. 6 “Finding the third edge” frame.

Finally, after the last bitmask is filled and accumulated into the resulting one, the voxel grid row is completed. If each object intersection has its pair and the final number of intersections is even, the result is correct and the model is watertight (Fig. 7).

Fig. 7 "Filling the final edge and the resulting bitmask" frame.

1.2 Sparse octree voxelization

Another voxel-based rendering method is very close to solid voxelization in that it represents not the surface but the "insides" of 3D models; its difference from the latter lies mainly in the representation of voxels. Sparse octree voxelization can also be considered an advanced solid voxelization method. The main advantage of this technique is that it saves a lot of memory when storing voxel data. The main point of sparse octree voxelization is to represent each voxel as a node of an octree data structure. Starting from the root node, which represents a voxel the size of the whole voxel grid, each node is subdivided into eight child nodes. If a node contains the contours of the model, it is subdivided again. According to the paper by Laine and Karras [3], the voxel data is stored in blocks (contiguous areas of memory). Each block contains information about the octree topology, voxel geometry, and additional shading information (such as voxel color). Because child information is stored inside the blocks, there is no need to allocate memory for octree leaves. This data structure also boosts raytracing performance.
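To make the layout more concrete, a simplified C++ sketch of such a node-and-block structure is given below; the field names and sizes are illustrative and do not reproduce the exact bit layout used in [3].

#include <cstdint>
#include <vector>

struct OctreeNode
{
    uint32_t firstChild;  // index of the first child node inside the block; 0 for a leaf
    uint8_t  validMask;   // bit i is set if child i exists (intersects the surface)
    uint8_t  leafMask;    // bit i is set if child i is a leaf voxel
    uint16_t shadingRef;  // index into the block's shading data (e.g. voxel color)
};

struct OctreeBlock            // a contiguous area of memory
{
    std::vector<OctreeNode> nodes;   // octree topology and voxel geometry
    std::vector<uint32_t>   colors;  // additional shading information
};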

Fig. 8 Two-dimensional sliced representation of the sparse octree voxelization method.

There are also several techniques for parallel octree construction using parallel computations. One such method, already mentioned in the introduction [6], uses level-order traversals for parallelization and stores not only information about the nodes, but also about each neighbor of those nodes, making it even faster to raytrace the octree.

The memory consumption of the sparse octree voxelization technique is dictated not by the voxel grid resolution, but mostly by the octree depth. Both factors influence the quality of the output voxelized model, but in the case of the octree technique, the number of cloud points in the voxelized space does not matter as much as it does for simple solid and surface voxelization algorithms.

1.3 Surface voxelization

Finally, the surface voxelization method renders only the contours of the 3D model, ignoring its interior. The main idea of this method is to set a voxel only if its center is overlapped by a triangle. The main advantage of this method is that it saves a lot of memory when storing voxel data, though, as already mentioned, some surface voxelization algorithms leave holes in the voxelized model. This happens when some polygons of the 3D model lie at a grazing angle to the camera view. The problem can be easily solved by expanding the processed triangles with the conservative rasterization method.
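The core test can be illustrated with a short 2D sketch: after the triangle and the voxel center are projected onto the triangle's dominant plane, a simple edge-function test decides coverage. The code below is only a sketch of that idea; a complete test would also compare the distance from the voxel center to the triangle plane with the voxel size.

struct Vec2 { float x, y; };

static float EdgeFunction(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

// Returns true if the projected voxel center lies inside the projected triangle.
bool CenterCoveredByTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                             const Vec2& center)
{
    float e0 = EdgeFunction(v0, v1, center);
    float e1 = EdgeFunction(v1, v2, center);
    float e2 = EdgeFunction(v2, v0, center);
    // Inside if all edge functions share a sign (works for both windings).
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}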

Fig. 9 The sliced image of a 3D model representing the surface voxelization method.

In the early 2000s, the surface voxelization algorithms proposed by researchers required multiple passes (processing the volume slice by slice and rebuilding the output geometry with each pass) and suffered from poor output quality. These problems resulted from the limited capabilities of the graphics hardware of the time. Later, with hardware development, new algorithms were presented: they enabled processing several slices at a time, considerably reducing the number of passes, but the output quality was still limited by the data structure used for storing voxel grids. It was binary, so each occupied voxel was represented by only one bit. This approach made it impossible to store any other information about a voxel, such as its color [7].

When graphics hardware manufacturers introduced the GPU hardware rasterizer in their products, a new wave of GPU-based voxelization techniques appeared. The main goal of exploiting the GPU hardware rasterizer is to use its point-in-triangle test and sampling operations, which are the core functions in surface voxelization algorithms. Providing such functions on the hardware side makes the whole voxelization process much faster and more effective.

Among the three described voxelization techniques, the most suitable for the project is surface voxelization. It is considered the most effective for GPU implementation, saves a lot of memory when storing voxel data (in comparison with solid voxelization), and works well with animated 3D models.

In the next section, the implemented algorithm of surface voxelization will be described.

2. Surface Voxelization Using Graphics Pipeline

The surface voxelization technique presented in this paper works entirely on the GPU side and utilizes not only the GPU hardware rasterizer, but also the OpenGL 4.2+ image load/store interface. That is why the whole voxelization process takes place inside shaders. The process is divided into two parts: voxelization and rendering. These two parts work as separate shader programs, executed one after another.

2.1 Voxelization pipeline

The voxelization graphics pipeline consists of three steps:

· Vertex shader;

· Geometry shader;

· Fragment shader;

These steps convert all the initial data (vertices, normals, texture coordinates, and triangle indices) into a 3D texture, which represents our voxel grid. Then, to render the voxel grid, the raytracing procedure must be completed. For this purpose, another set of shaders is utilized:

· Vertex shader;

· Fragment shader;

To start rendering a 3D model, an orthographic camera with the dimensions of the voxel grid must be placed in front of it. The model must be placed in the center of the camera view and scaled to the size of the voxel grid.

Then the following three uniforms must be set for the vertex shader of the voxelization set:

· Projection matrix;

· Model matrix;

· View matrix;

In the vertex shader, the model vertices are positioned in our voxel space. Next, the transformed values of the model vertices and normals are sent from the vertex shader to the geometry shader of the voxelization set. The texture coordinates are passed unchanged to the next shader.

The following uniforms are used in the geometry shader, along with the vertices, normals, and texture coordinates of each triangle:

· Projection matrix;

· View matrix;

· Pixel diagonal;

The pixel diagonal value can be calculated with the following formula:
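Assuming, as in [5], that the pixel footprint is measured in normalized screen coordinates, it takes the form

PixelDiagonal = PdScalar * sqrt(2) / VoxelGridDimensionSize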

In this formula, "PdScalar" represents a pixel diagonal scalar value, which in this case is set to 1.0, and "VoxelGridDimensionSize" represents the length of one dimension of our voxel cube, in voxels. Its value is set by the user and must be a power of two. First of all, in the geometry shader the input triangle must be swizzled to ensure a gap-free voxelization. Gaps in the voxelized model are the result of overly oblique camera angles. For this purpose, the input geometry is swizzled according to the dominant direction of its eye-space normal [5]. In other words, we choose the plane of maximal projection for the input triangle. The image below (Fig. 10) visualizes the process of swizzling: in it, the x-axis view of the input geometry is chosen for processing.

Fig. 10 The process of swizzling the input geometry [5].
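A minimal sketch of the swizzling criterion, the choice of the dominant axis, is given below; the actual implementation in [5] performs this selection on the triangle's face normal inside the geometry shader, and the structure and function names here are illustrative.

#include <cmath>

struct Vec3 { float x, y, z; };

// Returns 0, 1 or 2 for the axis (X, Y or Z) along which the triangle's
// projection is maximal; the triangle is then swizzled so that this axis
// becomes the view direction, which avoids grazing-angle gaps.
int DominantAxis(const Vec3& faceNormal)
{
    float ax = std::fabs(faceNormal.x);
    float ay = std::fabs(faceNormal.y);
    float az = std::fabs(faceNormal.z);
    if (ax >= ay && ax >= az) return 0; // project onto the YZ plane
    if (ay >= az)             return 1; // project onto the XZ plane
    return 2;                           // project onto the XY plane
}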

Before the fragments of the input 3D model are processed, the conservative rasterization technique must be applied to guarantee that there will be no gaps in the rendered voxel model. Conservative rasterization ensures that every pixel touching the input triangle will be rasterized [5].

This process is presented in the image below (Fig. 11). First, the screen coordinates of the triangle (V0V1V2) are calculated. Then a screen-space bounding box, enlarged by the pixel diagonal, is also calculated; it will be used later in the fragment shader for clipping. After that, the triangle vertices are moved outwards to expand the input triangle for conservative voxelization (V'0V'1V'2) [7].

Fig. 11 The conservative rasterization method for input triangles [5].

Then, together with the previously mentioned screen-space bounding box, the dilated triangle is sent to the fragment shader, where all fragments of the triangle outside this bounding box are discarded. If a fragment is inside the bounding box, the Phong shading and the texture color are written to the output 3D texture. The alternative to this method is computing and processing the exact bounding polygon in the geometry shader, which is much slower than conservative rasterization.
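The dilation itself can be sketched with the classic offset-edge construction: each edge of the screen-space triangle is pushed outwards by the pixel diagonal, and the new vertex is the intersection of the two adjacent offset edges. The C++ fragment below assumes counter-clockwise winding and is only an illustration of the geometry; [5] derives an equivalent construction directly inside the geometry shader.

#include <cmath>

struct Vec2 { float x, y; };

// Returns the dilated vertex that replaces the shared vertex 'b' of the
// edges (a -> b) and (b -> c), with both edges offset outwards by 'd'.
Vec2 DilatedVertex(Vec2 a, Vec2 b, Vec2 c, float d)
{
    Vec2 e0{ b.x - a.x, b.y - a.y };
    Vec2 e1{ c.x - b.x, c.y - b.y };
    // Outward edge normals for counter-clockwise winding.
    Vec2 n0{ e0.y, -e0.x };
    Vec2 n1{ e1.y, -e1.x };
    float l0 = std::sqrt(n0.x * n0.x + n0.y * n0.y);
    float l1 = std::sqrt(n1.x * n1.x + n1.y * n1.y);
    n0 = { n0.x / l0, n0.y / l0 };
    n1 = { n1.x / l1, n1.y / l1 };
    // Offset edge lines: n0 . p = n0 . a + d and n1 . p = n1 . b + d;
    // the dilated vertex is the solution of this 2x2 linear system.
    float c0  = n0.x * a.x + n0.y * a.y + d;
    float c1  = n1.x * b.x + n1.y * b.y + d;
    float det = n0.x * n1.y - n0.y * n1.x;
    return { (c0 * n1.y - c1 * n0.y) / det,
             (n0.x * c1 - n1.x * c0) / det };
}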

The overall scheme of the voxelization pipeline is presented below (Fig. 12). In the image, all the vertex, normal, index, and texture information is sent through all three shaders and transformed in the fragment shader into the output 3D texture via the image load/store interface provided by OpenGL 4.2+.

Fig. 12 The voxelization pipeline

2.2 Rendering pipeline

After the voxelization pipeline has been processed, all that is left to do is to raytrace the 3D texture. For this purpose, a shader program with two shaders is used:

· Vertex shader;

· Fragment shader;

To proceed, the following uniforms need to be sent to the raytracing fragment shader:

· Inverted ModelViewProjection matrix;

· 3D texture;

· Grid size;

· Step size;

First, for each ray, the origin and direction are calculated from the pixel coordinate. Then the ray components are checked for being parallel to the axes, with a precision of 0.00001. After that, the axis-aligned bounding box (AABB) test is performed to check whether the ray touches the bounding box of the volume. If it does, the shader traverses the voxels until the ray exits the volume and writes the voxel colors to the framebuffer. If it does not, an empty pixel is written to the framebuffer.
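The ray/volume test is the classic slab method; a reference sketch is shown below. Variable names are illustrative, and the near-zero direction components are assumed to have been clamped beforehand (that is what the 0.00001 precision check is for). On success, tNear and tFar bound the fixed-step march through the 3D texture.

#include <algorithm>

struct Vec3 { float x, y, z; };

// 'invDir' holds 1/direction per component.
bool RayHitsVolume(const Vec3& origin, const Vec3& invDir,
                   const Vec3& boxMin, const Vec3& boxMax,
                   float& tNear, float& tFar)
{
    float t0x = (boxMin.x - origin.x) * invDir.x;
    float t1x = (boxMax.x - origin.x) * invDir.x;
    float t0y = (boxMin.y - origin.y) * invDir.y;
    float t1y = (boxMax.y - origin.y) * invDir.y;
    float t0z = (boxMin.z - origin.z) * invDir.z;
    float t1z = (boxMax.z - origin.z) * invDir.z;

    tNear = std::max({ std::min(t0x, t1x), std::min(t0y, t1y), std::min(t0z, t1z) });
    tFar  = std::min({ std::max(t0x, t1x), std::max(t0y, t1y), std::max(t0z, t1z) });

    // The ray touches the volume if the slab intervals overlap in front of the origin.
    return tFar >= std::max(tNear, 0.0f);
}

Between tNear and tFar, the fragment shader then samples the 3D texture at intervals of the step size uniform and writes the voxel colors it encounters to the framebuffer.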

In the end, the result is drawn on a simple screen-sized quad placed in the scene in front of the camera.

Two main steps of the implemented surface voxelization technique were described in this section: the voxelization pipeline and the rendering pipeline. Both steps are processed fully on the GPU side.

In the next section, the preparation for the project implementation, the choice of implementation tools, and the aspects of the implementation process are described. Finally, the conducted tests are presented.

3. Implementation and Tests

The first and main preparation step before the project implementation was to choose an application programming interface for rendering graphics. Between the two main graphics APIs, OpenGL and DirectX, it was decided to use OpenGL for the implementation of the introduced voxelization technique. Both APIs offer all the needed functionality: 3D textures and GPU hardware rasterizer support. So the choice was made considering other factors. The most important one was portability: OpenGL is a cross-platform API, while DirectX is Windows-only. Another important factor was ease of use, by which, again, OpenGL was preferred.

Another important step before implementing the program was to choose the programming language. There were two main options: C++ with the standard OpenGL libraries, or C# with the OpenTK framework, which is a wrapper around the standard OpenGL libraries for the C# programming language. Since it was decided to make the program as portable to other platforms as possible, the choice fell on C++. Also, using C++ with the standard libraries tends to provide faster and more stable performance in comparison to wrappers.

The following additional libraries were used to implement the 3D model voxelization program:

· Boost 1.46.0 -- this set of C++ libraries provides cross-platform high-level support for different tasks and data structures. Only one feature from these libraries, FileSystem, is used in the project, to simplify the file loading routine (meshes, animations, textures).

· SOIL (Simple OpenGL Image Library) -- a small C library for loading images in the most popular formats (BMP, JPEG, PNG, etc.). It was used for loading 3D model textures in the .TGA format. This library also offers automatic ways to generate an OpenGL texture directly from a file, but since those features use functions that were deprecated in the latest versions of OpenGL, it was decided to load only the image data with the help of SOIL and then generate the textures manually.

For the purpose of experimenting with animated 3D models, the MD5 animation file format was chosen. This format is not widely used in computer graphics nowadays, though it used to be a common one. The most popular example of its use is the video game DOOM 3, developed by id Software.

The MD5 format for storing 3D models was chosen over other formats for several reasons:

· It stores mesh and animation data in separate files. The data is stored as ASCII plain text, so it is easy to parse.

· MD5 animation data structure is fairly simple to process.

An animated 3D model usually consists of two files: one for the model meshes and another for a particular animation. Different animation files can be applied to the same mesh file.

The basic mesh file structure starts by defining the MD5 version and writing specific commands inside the "commandline" line:

MD5Version <int>
commandline <string>

Then the number of joints and meshes is defined for the 3D model:

numJoints <int>
numMeshes <int>

After that, all joints are written inside the "joints" field: first the name of the joint, then its parent index, and then its position and orientation:

joints {
  <string:name> <int:parentIndex> ( <vec3:position> ) ( <vec3:orientation> )
}

After the joints of the 3D model have been defined, each mesh must be defined in its own "mesh" field. The "shader" field refers to the file path of the mesh texture. Then, after the total vertex count, all the vertices are written with their texture coordinates and weight parameters. Triangles are written the same way, with their count specified first. Finally, all the weights are defined with the parameters specified below.

mesh {
  shader <string>
  numverts <int:numVerts>
  vert <int:vertexIndex> ( <vec2:texCoords> ) <int:startWeight> <int:weightCount>
  numtris <int:numTris>
  tri <int:triangleIndex> <int:vertexIndex0> <int:vertexIndex1> <int:vertexIndex2>
  numweights <int:numWeights>
  weight <int:weightIndex> <int:jointIndex> <float:weightBias> ( <vec3:weightPosition> )
}
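For reference, the data parsed from such a mesh file could be kept in plain structures like the ones below; this is only a sketch, and the type and field names do not correspond to the program's actual classes.

#include <string>
#include <vector>

struct Md5Joint  { std::string name; int parent; float pos[3]; float orient[4]; }; // orientation: quaternion, w reconstructed from x, y, z
struct Md5Vertex { float texCoords[2]; int startWeight; int weightCount; };
struct Md5Weight { int jointIndex; float bias; float pos[3]; };

struct Md5Mesh
{
    std::string            shader;   // texture file path
    std::vector<Md5Vertex> verts;
    std::vector<int>       indices;  // three vertex indices per triangle
    std::vector<Md5Weight> weights;
};

struct Md5Model
{
    std::vector<Md5Joint> joints;
    std::vector<Md5Mesh>  meshes;
};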

The structure of an MD5 animation file starts, again, with the MD5 version and the commands in the "commandline" field. Then the following global animation parameters are defined: the number of frames, the number of joints in the model rig, the animation framerate, and the number of animated components:

MD5Version <int>
commandline <string>
numFrames <int>
numJoints <int>
frameRate <int>
numAnimatedComponents <int>

After that, the hierarchy of all joints is declared inside the "hierarchy" field, each joint on a new line:

hierarchy {
  <string:jointName> <int:parentIndex> <int:flags> <int:startIndex>
}

Next, the bounding boxes of the model are defined for each frame in the "bounds" field. Then the default position and orientation of each joint are defined in the "baseframe" field:

bounds {
  ( <vec3:boundMin> ) ( <vec3:boundMax> )
}

baseframe {
  ( <vec3:position> ) ( <vec3:orientation> )
}

Finally, each frame is specified in its own "frame" field together with its number. Inside, the frame data is written as floating-point numbers:

frame <int:frameNum> {
  <float:frameData>
}
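The parsed animation data can be kept in similar structures; again, the names below are only illustrative.

#include <string>
#include <vector>

struct Md5HierarchyEntry { std::string name; int parent; int flags; int startIndex; };
struct Md5Bound          { float boundMin[3]; float boundMax[3]; };
struct Md5BaseFrameJoint { float pos[3]; float orient[4]; };

struct Md5Animation
{
    int frameRate = 0;
    int numAnimatedComponents = 0;
    std::vector<Md5HierarchyEntry>  hierarchy;  // one entry per joint
    std::vector<Md5Bound>           bounds;     // one bounding box per frame
    std::vector<Md5BaseFrameJoint>  baseFrame;  // one default pose per joint
    std::vector<std::vector<float>> frames;     // numAnimatedComponents floats per frame
};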

The main class of our program is the 3D model class. It contains all the data parsed from the MD5 mesh file (vertices, normals, indices, texture coordinates, etc.) and also stores an Animation class object, which holds all the data parsed from the MD5 animation file. Moreover, the 3D model class contains all the voxelization and rendering methods, which are called from the main program class by the Update function.

To handle the voxelization process, three vertex buffer objects (VBOs) are created for vertices, normals, and texture coordinates. These VBOs are then bound to the voxelization shader program as input data.
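A sketch of this setup is shown below; GLEW is assumed as the extension loader, and the buffer handles and attribute locations 0-2 are assumptions rather than the program's actual ones.

#include <GL/glew.h>
#include <vector>

void CreateModelBuffers(const std::vector<float>& positions,  // xyz per vertex
                        const std::vector<float>& normals,    // xyz per vertex
                        const std::vector<float>& texCoords,  // uv per vertex
                        GLuint vbo[3])
{
    glGenBuffers(3, vbo);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                 positions.data(), GL_DYNAMIC_DRAW); // updated every animation frame
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(float),
                 normals.data(), GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
    glBufferData(GL_ARRAY_BUFFER, texCoords.size() * sizeof(float),
                 texCoords.data(), GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
}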

Before model rendering starts, the initialization process is performed: the model animation is reset to its default state, the VBOs are created and linked, and the voxelization and raytracing shader programs are loaded and linked. Then the output 3D texture and a framebuffer for it are created.
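The 3D texture part of that initialization could look as follows; the RGBA8 format and image unit 0 are assumptions, while the OpenGL calls themselves are the standard 4.2+ image load/store setup.

#include <GL/glew.h>

GLuint CreateVoxelTexture(int gridSize)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                 gridSize, gridSize, gridSize, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // Bind the whole texture (layered) to image unit 0 so the voxelization
    // fragment shader can write voxels into it with imageStore.
    glBindImageTexture(0, tex, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA8);
    return tex;
}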

Then two rendering methods are called in the update cycle: "VoxelizeScene" and "RenderVoxels". Each of them handles its own shader program. The model animation state is also updated, so the VBO data changes every frame.
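The update cycle therefore reduces to two passes per frame; the sketch below uses placeholder program handles and omits the uniform setup and draw calls, but shows the memory barrier that makes the image writes of the first pass visible to the second.

#include <GL/glew.h>

void UpdateFrame(GLuint voxelizeProgram, GLuint raytraceProgram)
{
    // Pass 1: voxelization, writing into the 3D texture via image load/store.
    glUseProgram(voxelizeProgram);
    // ... set uniforms, draw the model ...

    // Make the image writes visible before the 3D texture is read again.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    // Pass 2: raytracing the 3D texture onto a screen-sized quad.
    glUseProgram(raytraceProgram);
    // ... set uniforms, draw the fullscreen quad ...
}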

To analyze the performance of the implemented voxelization technique, two tests were conducted. The first one tests the voxelization technique against model geometry complexity; the results are presented in frames per second (FPS). The second test checks the efficiency of voxelizing animated 3D models; the results are also presented in frames per second. Both tests were conducted on a PC with the following specifications:

· Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz

· RAM 4 GB

· NVIDIA GeForce 410M

For the first test of the GPU-based surface voxelization technique, three 3D models were chosen: the Stanford bunny, the Stanford happy Buddha, and the Stanford dragon. All three models are widely used for various computer graphics experiments, including voxelization algorithms. None of the models has textures; instead, their voxels are colored red, green, and blue to show the "swizzling" process in the surface voxelization pipeline (Fig. 13).

The following results were obtained with a voxel grid resolution of 256:

Model                     Triangles    FPS
Stanford Bunny            69,451       ~20.00
Stanford Happy Buddha     1,087,716    ~10.00
Stanford Dragon           871,414      ~12.00

Fig. 13 Voxelized Stanford bunny with voxels marked in red, green, and blue colors to show the “swizzling” process.

The second test measures the performance of voxelizing an animated and textured 3D model. It is important to note that this test was conducted not to compare animated 3D models among themselves, as in the first test, but to compare the performance of voxelizing static 3D objects with animated ones. For this purpose, the "Bob" MD5 model with one animated "Watch" cycle was chosen (Fig. 14). The voxel grid size, again, was 256x256x256. The test showed far from positive results: approximately 5 frames per second. These results can be explained by the need to refresh and update every buffer with each animation frame.

To improve the performance of the implementation, the following approach is suggested: the model animation can be prerendered into per-frame 3D textures at a given framerate. This approach requires more memory, but considerably reduces the load on the graphics processor, since there is no longer a need to voxelize the model continuously.

Fig. 14 The screenshot of the voxelized and animated “Bob With Lamp”.

Fig. 15 A closer look at the voxels.

The final section of the paper contains the analysis of the results and plans for future work.

4. Results and Future Work

By the end of the project, the most popular voxelization methods had been studied and analyzed, and the most suitable one was chosen along with the necessary development tools. The GPU-based surface voxelization technique was implemented, and performance tests were conducted and analyzed. The results of the tests showed that the algorithm works with non-animated, non-textured, high-poly 3D models at 14 FPS on average, but shows lower results with animated and textured low-poly ones, at 3-5 FPS on average. A new approach is proposed to reduce the load on the graphics processor: storing prerendered animation 3D textures instead of voxelizing frames continuously.

Future work is to continue the research on voxelization techniques and to improve the performance of the current one with the help of parallel computations. Also, in addition to 3D model voxelization, the problems of animating voxelized models will be analyzed, and solutions to these problems will be suggested.

Bibliography

1. Goosen, C. (2013). GPU-Based Sparse Voxel Octree Raytracing for Rendering of Procedurally Generated Terrain. University of Cape Town, Department of Computer Science.

2. Crassin, C. (2009). GigaVoxels: A Voxel-Based Rendering Pipeline for Efficient Exploration of Large and Detailed Scenes. Doctoral Thesis, University of Grenoble, France.

3. Laine, S., Karras, T. (2010). Efficient Sparse Voxel Octrees. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D).

4. Decoret, X., Eisemann, E. (2008). Single-pass GPU Solid Voxelization and Applications. Canadian Information Processing Society.

5. Rauwendaal, R., Bailey, M. (2013). Hybrid Computational Voxelization Using the Graphics Pipeline. Oregon State University.

6. Zhou, K., Gong, M., Huang, X., Guo, B. (2011). Data-Parallel Octrees for Surface Reconstruction. State Key Lab of CAD&CG, Zhejiang University; Microsoft Research Asia.

7. Crassin, C., Green, S. (2012). Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer. In OpenGL Insights, P. Cozzi and C. Riccio, Eds. A K Peters/CRC Press, Boston, MA.
