Channel: Game Development

Krita* Gemini* - Twice as Nice on a 2-in-1


Download PDF

Why 2-in-1

A 2 in 1 is a PC that transforms between a laptop computer and a tablet. Laptop mode (sometimes referred to as desktop mode) allows a keyboard and mouse to be used as the primary input devices. Tablet mode relies on the touchscreen, thus requiring finger or stylus interaction. A 2 in 1, like the Intel® Ultrabook™ 2 in 1, offers precision and control with multiple input options that allow you to type when you need to work and touch when you want to play.

Developers have to consider multiple scenarios when modifying their applications to take advantage of this new type of transformable computer. Some applications may want to keep the menus and appearance nearly identical in both modes, while others, like Krita Gemini for Windows* 8 (Reference 1), will carefully select what is highlighted and made available in each user interface mode. Krita is a program for sketching and painting that offers an end-to-end solution for creating digital painting files from scratch (Reference 2). This article discusses how the Krita developers added 2 in 1 mode awareness to their application - including implementation of both automatic and user-selected mode switching - and some of the areas developers should consider when creating applications for the 2 in 1 experience.

Introduction

Over the years, computers have used a variety of input methods, from punch cards to command lines to point-and-click. With the adoption of touch screens, we can now point-and-click with a mouse, stylus, or fingers. Most of us are not ready to do everything with touch, and with mode-aware applications like Krita Gemini, we don’t have to. 2 in 1s, like an Intel® Ultrabook™ 2 in 1, can deliver the user interface mode that gives the best experience possible, on one device.

There are multiple ways that a 2 in 1 computer can transform between laptop and tablet modes (Figure 1 & Figure 2); many more examples of 2 in 1 computers can be found on the Intel website (Reference 3). The computer can transform from laptop mode into tablet mode by detaching the screen from the keyboard or by another means of disabling the keyboard and making the screen the primary input device (such as folding the screen on top of the keyboard). Computer manufacturers are beginning to provide this hardware transition information to the operating system. The Windows* 8 WM_SETTINGCHANGE message, with the "ConvertibleSlateMode" text parameter, signals the automatic laptop-to-tablet and tablet-to-laptop mode changes. It is also a good idea for developers to include a manual mode-change button for users' convenience.
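For example, a minimal sketch (ours, not from the Krita sources) of querying the current state once at startup with the Win32 API, so an application can pick the right UI before any change notification arrives:

// Illustrative only: read the initial 2 in 1 state.
#ifdef Q_OS_WIN
#include <windows.h>
#ifndef SM_CONVERTIBLESLATEMODE
#define SM_CONVERTIBLESLATEMODE 0x2003
#endif
// GetSystemMetrics returns 0 while the system is in slate (tablet) mode.
bool startInTabletMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);
#endif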

Just as there are multiple ways that the 2 in 1 can transform between laptop and tablet modes, software can be designed in different ways to respond to the transformation. In some cases it may be desirable to keep the UI as close to the laptop mode as possible, while in other cases you may want to make more significant changes to the UI. Intel has worked with many vendors to help them add 2 in 1 awareness to their applications. Intel helped KO GmbH combine the functionality of their Krita Touch application with their popular Krita open source painting program (laptop application) in the new Krita Gemini application. The Krita project is an active development community, welcoming new ideas and maintaining quality support. The team added the mechanisms required to provide a seamless transition from the laptop "mouse and keyboard" mode to the touch interface for tablet mode. See Krita Gemini's user interface (UI) transformations in action in the short video in Figure 3.


Figure 3: Video - Krita Gemini UI Change – click icon to run

Create in Tablet Mode, Refine in Laptop Mode

The Gemini team set out to maximize the user experience in the two modes of operation. In Figure 4 & Figure 5 you can see that the UI changes from one mode to the other are many and dramatic. This allows the user to seamlessly move from drawing “in the field” while in tablet mode to touch-up and finer detail work when in laptop mode.


Figure 4: Krita Gemini tablet user interface


Figure 5: Krita Gemini laptop user interface

There are three main steps to making an application transformable between the two modes of operation.

Step one: make the application touch aware. We were somewhat lucky in that the touch-aware work was started well ahead of the 2 in 1 activity. This is usually a heavier lift than the tablet-mode transition work. Intel has published articles on adding touch input to a Windows 8 application (Reference 4).

Step two: add 2 in 1 awareness. The first part of the video (Figure 3) demonstrates the automatic, sensor-activated mode change, triggered in this case by a rotation of the screen (Figure 6). After that, the user-initiated transition via a button in the application is shown (Figure 7).


Figure 6: Sensor-state activated 2 in 1 mode transition


Figure 7: Switch to Sketch transition button – user initiated action for laptop to tablet mode

Support for automatic transitions requires the sensor state to be defined and monitored, and appropriate actions to be taken once the state is known. In addition, a user-initiated mode transition should be included as a courtesy to the user, should she wish to be in tablet mode when the code favors laptop mode. You can reference the Intel article "How to Write a 2-in-1 Aware Application" for an example approach to adding the sensor-based transition (Reference 5). Krita's code for managing the transitions from one mode to the other can be found in its source code by searching for "SlateMode" (Reference 6). Krita is released under a GNU Public License; please refer to the source code repository for the latest information (Reference 7).

// Snip from Gemini - Define 2-in1 mode hardware states:

#ifdef Q_OS_WIN
#include <shellapi.h>
#define SM_CONVERTIBLESLATEMODE 0x2003
#define SM_SYSTEMDOCKED 0x2004
#endif

Not all touch-enabled computers offer the automatic transition, so we suggest you do as the Krita Gemini team did and include a button in your application that lets the user manually initiate the transition from one mode to the other. Gemini's button is shown in Figure 7. The button-initiated UI transition performs the same functions as the mechanical-sensor-initiated transition: the screen layout and default input device change from touch and large icons in tablet mode to keyboard, mouse, and smaller icons in laptop mode. However, since the sensor path is not involved, the button method must perform the screen, icon, and default input device changes without the sensor-state information. Developers should therefore provide a path for the user to change from one mode to the other with touch or mouse regardless of the current UI state, in case the user chooses an inappropriate mode.

The button definition - KAction() - as well as its states and actions are shown in the code below (Reference 6):

// Snip from Gemini - Define 2-in1 Mode Transition Button:

         toDesktop = new KAction(q);
         toDesktop->setEnabled(false);
         toDesktop->setText(tr("Switch to Desktop"));
         connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchDesktopForced()));
         connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchToDesktop()));
         sketchView->engine()->rootContext()->setContextProperty("switchToDesktopAction", toDesktop);

Engineers then took on the task of handling the events that drive the transition. The code first checks the last known state of the system, since it cannot assume it is running on a 2-in-1 system, and then changes the mode (Reference 6):

// Snip from Gemini - Perform 2-in1 Mode Transition via Button:

#ifdef Q_OS_WIN
bool MainWindow::winEvent( MSG * message, long * result ) {
     if (message && message->message == WM_SETTINGCHANGE && message->lParam)
     {
         if (wcscmp(TEXT("ConvertibleSlateMode"), (TCHAR *) message->lParam) == 0)
             d->notifySlateModeChange();
         else if (wcscmp(TEXT("SystemDockMode"), (TCHAR *) 
message->lParam) == 0)
             d->notifyDockingModeChange();
         *result = 0;
         return true;
     }
     return false;
}
#endif

void MainWindow::Private::notifySlateModeChange()
{
#ifdef Q_OS_WIN
     bool bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);

     if (slateMode != bSlateMode)
     {
         slateMode = bSlateMode;
         emit q->slateModeChanged();
         if (forceSketch || (slateMode && !forceDesktop))
         {
             if (!toSketch || (toSketch && toSketch->isEnabled()))
                 q->switchToSketch();
         }
         else
         {
                 q->switchToDesktop();
         }
         //qDebug() << "Slate mode is now"<< slateMode;
     }
#endif
}

void MainWindow::Private::notifyDockingModeChange()
{
#ifdef Q_OS_WIN
     bool bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0);

     if (docked != bDocked)
     {
         docked = bDocked;
         //qDebug() << "Docking mode is now"<< docked;
     }
#endif
}

Step three: fix issues discovered during testing. While using the palette in touch or mouse mode is fairly easy, the workspace itself needs to hold focus and zoom consistent with the user's expectations, so simply making everything bigger was not an option. Controls got bigger for touch interaction in tablet mode, but the image in the workspace needed to be managed separately to preserve the expected user experience. Notice in the video (Figure 3) that the image in the edit pane stays the same size on the screen from one mode to the other; it took creative work from the developers to reserve the screen real estate that keeps the image consistent. Another issue was that an initial effort had both UIs running at the same time, which hurt performance because the two UIs shared the same graphics resources. Adjustments were made in both UIs to keep their resource requirements as distinct as possible and to prioritize the active UI wherever possible.

Wrap-up

As you can see, adding 2 in 1 mode awareness to your application is a pretty straightforward process. You need to look at how your users will interact with your application when in one interactive mode versus the other. Read the Intel article “Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs“ for more information on creating an application with a transforming user interface (Reference 8). For Krita Gemini, the decision was made to make creating drawings and art simple while in tablet mode and add the finishing touches to those creations while in the laptop mode. What can you highlight in your application when presenting it to users in tablet mode versus laptop mode?

References

  1. Krita Gemini General Information
  2. Krita Gemini executable download (scroll to Krita Gemini link)
  3. Intel.com 2 in 1 information page
  4. Intel Article: Mixing Stylus and Touch Input on Windows* 8 by Meghana Rao
  5. Intel Article: How to Write a 2-in-1 Aware Application by Stephan Rogers
  6. Krita Gemini mode transition source code download
  7. KO GmbH Krita Gemini source code and license repository
  8. Intel® Developer Forum 2013 Presentation by Meghana Rao (pdf) - Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs
  9. Krita 2 in 1 UI Change Video on IDZ or YouTube*

About the Author

Tim Duncan is an Intel Engineer and is described by friends as “Mr. Gidget-Gadget.” Currently helping developers integrate technology into solutions, Tim has decades of industry experience, from chip manufacturing to systems integration. Find him on the Intel® Developer Zone as Tim Duncan (Intel)

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013-2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

 

  • 2-in-1
  • Krita Gemini
  • Developers
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • UX
  • Windows*
  • Game Development
  • Graphics
  • Sensors
  • Touch Interfaces
  • User Experience and Design
  • Laptop
  • Tablet
  • URL

  • Using the Beacon Mountain Toolset and NDK for Native App Development



    Download as PDF

    Download Source Code

    Summary: The goal of this project is to demonstrate how easy it is to build native Android apps with the Beacon Mountain toolset and the Android NDK. We will do this by building a simple game, walking through the steps of installing the tools with Beacon Mountain, building the game, and testing it with the Intel® Hardware Accelerated Execution Manager (Intel® HAXM) emulator. Commented source code is also available.

    Installing Beacon Mountain

    Beacon Mountain is a one-click install for most of the tools needed for developing Android* applications, including Eclipse* and the Android SDK and NDK. This can save hours or even days of downloading, building, and installing different packages and development tools.

    Install Beacon Mountain from here: http://software.intel.com/en-us/vcsource/tools/beaconmountain

    Creating the project

    1. Open Eclipse ADT and create a new workspace called MazeGame.



      Click the New Android Application button and set the project name to MazeGame. Change all API levels to API 17: Android 4.2.



      Click Next, accepting all default settings, until the Finish button appears, then click it.
       
    2. Since we are creating an app that will involve native C++ code, we need to set the NDK location. Click Window->Preferences and expand the Android menu. Browse to the location of your Beacon Mountain install folder, select the NDK folder inside it, and click OK.


       
    3. To enable native C++ compilation, right-click the project, and select Android Tools->Add Native Support.



      Accept the default library name by clicking Finish.
       
    4. By default, our project will only build for ARM devices. To enable building for x86 devices, we'll need to create an Application.mk file alongside our Android.mk in the /jni folder and add the following.
    APP_ABI := x86 armeabi
    APP_STL := stlport_static
    

    After building, you should see armeabi and x86 folders inside MazeGame/MazeGame/bin.

    Game Structure

    Although there are many good ways to structure our game, we'll start with the simplest possible format:

    • A nearly empty activity that loads a view.
    • A view that extends GLSurfaceView. We'll call into our native code from here to render each frame.
    • A C++ MazeGame class that will manage all the game objects, the physics engine, communication with the Java* wrapper and OpenGL* setup.
    • A C++ GameObject class that will manage object position, 3D model parsing, and drawing itself.

    Calling Native C++ Code From Java

    To call native code, we'll need to load our library (the one we configured when we created the Project) at the end of our view file.

    static {
            System.loadLibrary("MazeGame");
        }
    

    Note that the actual library (inside the lib/x86 folder) will be called libMazeGame.so, not MazeGame.so.

    We'll also need to define Java versions of the native functions we'll be calling:

        public native void init(int rotationDegrees);
        public native void restart();
        public native void setRotation(int degrees);
        public native void loadResources(Bitmap circuitBoardBitmap, Bitmap componentsBitmap, Bitmap stripesBitmap, Bitmap ballBitmap);
        public native void resize(int width, int height);
        public native void renderFrame(double timeStepSeconds, double currTimeSeconds);
        public native void accelerometerChanged(float x, float y);
        public native void deinit();
    
    

    Finally, we'll need to define these functions in MazeGame.cpp. Native functions must follow a specific naming convention to be callable from Java:

    JNIEXPORT void JNICALL Java_com_example_mazegame_MazeGameView_init(JNIEnv* env, jobject thisClazz, int rotationDegrees){
        gameInst = new MazeGame(env, thisClazz, rotationDegrees);
        gameInst->restart();
    }
    

    Notice the function name. It starts with the full classpath of the Java file that will be calling into it. Also, the first two arguments are passed in by the system, so they are required, and there are no matching parameters for them on the Java side.

    Because this is C++ and not C, we'll need to add extern "C" linkage for them above the function definitions.

    extern "C" {
    JNIEXPORT void JNICALL Java_com_example_mazegame_MazeGameView_init(JNIEnv* env, jobject obj, int rotationDegrees);
    // ...declarations for the remaining native functions...
    }

    Calling Java From C++

    Some tasks, like playing sounds and opening dialogs, are best done in Java, so we'll need a way to call back out from our native C++ code. In our constructor, we'll save references to the calling class and the PlaySound method on that class:

    MazeGame::MazeGame(JNIEnv* env, jobject clazz, int rotationDegrees)
    {
        _environment = env;
        _callingClass = (jclass)(env->NewGlobalRef(clazz));
        jclass viewClass = env->FindClass("com/example/mazegame/MazeGameView");
        _playSoundMethodID = env->GetMethodID(viewClass, "PlaySound", "(Ljava/lang/String;)V");
        _showGameOverDialogMethodID = env->GetMethodID(viewClass, "ShowGameOverDialog", "()V");
        // ...
    }
    
    Then, when we are ready to play a sound, we can simply call the saved reference:
    
    void MazeGame::playSound(const char* soundId){
    jstring jstr = _environment->NewStringUTF(soundId);
        _environment->CallVoidMethod(_callingClass, _playSoundMethodID, jstr);
    }
    
    

    Integrating Box2D

    One of the best things about the NDK is that it allows development teams to use existing C++ libraries, such as the well-known Box2D physics engine. After downloading and unzipping Box2D, move it into the jni folder. We'll also need to link in all of the Box2D libraries in our jni/Android.mk file:

    LOCAL_PATH:= $(call my-dir)
    
    include $(CLEAR_VARS)
    
    LOCAL_MODULE    := maze-game
    FILE_LIST := $(wildcard $(LOCAL_PATH)/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Collision/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Collision/Shapes/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Common/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Dynamics/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Dynamics/Contacts/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Dynamics/Joints/*.cpp)
    LOCAL_SRC_FILES := $(FILE_LIST:$(LOCAL_PATH)/%=%)

    include $(BUILD_SHARED_LIBRARY)
    

    Now we can include Box2D in our code:

    #include <Box2D/Box2D.h>
    ...
    b2World* _world;
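
    As a quick check that the engine is wired up, a minimal, hypothetical use (not taken from the sample's source, and assuming Box2D 2.2 or later) is to create a world with downward gravity and step it once per rendered frame:

    #include <Box2D/Box2D.h>

    b2Vec2 gravity(0.0f, -10.0f);            // gravity pointing down
    b2World* _world = new b2World(gravity);

    // Once per frame, before drawing the game objects:
    _world->Step(1.0f / 60.0f,   // fixed time step in seconds
                 8,              // velocity iterations
                 3);             // position iterations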
    

    Testing with the Intel HAXM Emulator

    The Intel HAXM emulator, part of the Beacon Mountain toolset, provides a massive speed increase over the stock Android emulators. This can be crucial for game development, as testing many scenarios becomes impossible at low frame rates.

    Begin by right-clicking the project and choosing Properties. Click the Run/Debug Settings item in the left-nav. To test our project, we'll need to add a launch configuration. So click the New button and select Android Application from the list.

    Under the Android tab, click Browse and select the main project. Then click the Target tab and select the x86 device from the list.

    Click OK. We can now test our project by right-clicking it and selecting Run As->Android Application.

    Summary

    This has been a high-level overview of how the Beacon Mountain toolset can accelerate Android game development. For more information, download the full source code of the sample application or check out the Beacon Mountain home page (http://software.intel.com/en-us/vcsource/tools/beaconmountain).

  • Intel® HAXM
  • Beacon Mountain
  • emulator
  • applications
  • Frame rendering
  • x86
  • ARM
  • Developers
  • Android*
  • Intel Hardware Accelerated Execution Manager (HAXM)
  • OpenCL*
  • Development Tools
  • Game Development
  • User Experience and Design

  • URL
  • Dynamic Resolution Rendering with OpenGL* ES 2.0


    Download

    Dynamic Resolution Rendering with OpenGL* ES 2.0 [PDF 677KB]
    Code sample: dynamic-resolution.zip [ZIP 4MB]

    Pixel Processing Is Expensive

    When performance analysis is run on game and graphics workloads, processing the fragment (or pixel) shaders usually turns out to be the main performance bottleneck. That stands to reason, because lighting calculations, texture sampling, and post-processing effects are all computed in the fragment shader. Computing the final color of every pixel on the screen takes a great deal of processing power and time and can be very expensive. In addition, each newly released mobile platform arrives with a higher resolution, raising the total cost: supporting a higher resolution means more fragment shader invocations. High resolution is not the only problem facing mobile developers, though; so is the range of resolutions across devices. A quick survey of devices on the market at the time of writing shows how much resolutions vary, even among devices running the same operating system.

    • Apple iPhone* 5: 1136 x 640, PowerVR* SGX543MP3
    • Apple iPhone 4S: 960 x 640, PowerVR SGX543MP2
    • Nexus* 4: 1280 x 768, Adreno* 320
    • Galaxy Nexus: 1280 x 720, PowerVR SGX540
    • Motorola RAZR* i: 960 x 540, PowerVR SGX540
    • Samsung Galaxy SIII: 1280 x 720, Adreno 225

    Rising resolutions, and the variety of resolutions across devices, are problems game developers either face already or will face soon. Another problem is that no matter how much the graphics hardware improves, the gains are inevitably consumed by pushing more pixels.

    Several Ways to Render the Game Scene

    There are many reasonable, proven ways to handle the range of resolutions in a game. The simplest is to draw the scene at the device's native resolution. For some types of games this may be all you ever need, or the fragment shaders may be light enough that they never become the bottleneck. If that is your situation you are fairly constrained anyway, but you still need to make sure your art assets work at every resolution that matters.

    A second approach is to pick a fixed resolution rather than the native one. This lets you tune your art assets and shaders for that fixed resolution, but it may not give users the best possible experience.

    Another common approach is to let the user set the desired resolution when the game starts. The back buffer is then created at the resolution the player chose, which handles the variety of resolutions effectively: players can pick whatever works best on their device. You still need to confirm that your art assets work at every resolution the user can select.

    The third method, presented in this article, is called dynamic resolution rendering. It is a common technique in console and high-end PC games. The implementation described here was adapted from the DirectX* version in [Binks 2011] for use with OpenGL* ES 2.0. With dynamic resolution rendering, the back buffer is the size of the native resolution, but the scene is drawn at a chosen resolution into an offscreen texture. As Figure 1 shows, the scene is rendered into a portion of the offscreen texture, and that portion is then sampled to fill the back buffer. UI elements are drawn at the native resolution.



    Figure 1. Dynamic resolution rendering

    Rendering to the Offscreen Texture

    The first step is to create the offscreen texture. In OpenGL ES 2.0, create a GL_FRAMEBUFFER with the desired texture size. The following code does this:

    glGenFramebuffers(1, &(_render_target->frame_buffer));
    glGenTextures(1, &(_render_target->texture));
    glGenRenderbuffers(1, &(_render_target->depth_buffer));
    
    _render_target->width = width;
    _render_target->height = height;
    
    glBindTexture(GL_TEXTURE_2D, _render_target->texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _render_target->width, _render_target->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    
    glBindRenderbuffer(GL_RENDERBUFFER, _render_target->depth_buffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _render_target->width, _render_target->height);
    
    glBindFramebuffer(GL_FRAMEBUFFER, _render_target->frame_buffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _render_target->texture, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _render_target->depth_buffer);
    

    The call to glTexImage2D creates the texture we will render into, and the call to glFramebufferTexture2D attaches that texture to the framebuffer. Then, with the framebuffer bound, the scene is rendered as follows:

    // 1. SAVE OUT THE DEFAULT FRAME BUFFER
    static GLint default_frame_buffer = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &default_frame_buffer);
    
    // 2. RENDER TO OFFSCREEN RENDER TARGET
    glBindFramebuffer(GL_FRAMEBUFFER, render_target->frame_buffer);
    glViewport(0, 0, render_target->width * resolution_factor, render_target->height * resolution_factor);
    glClearColor(0.25f, 0.25f, 0.25f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /// DRAW THE SCENE ///
    // 3. RESTORE DEFAULT FRAME BUFFER
    glBindFramebuffer(GL_FRAMEBUFFER, default_frame_buffer);
    glBindTexture(GL_TEXTURE_2D, 0);
    
    // 4. RENDER FULLSCREEN QUAD
    glViewport(0, 0, screen_width, screen_height);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    

    After the scene has been rendered, the default framebuffer (the back buffer) is bound again. At this point the scene is held in the offscreen texture. The next step is to render a full-screen quad that samples from the offscreen texture. The following code shows how:

    glUseProgram(fs_quad_shader_program);
    glEnableVertexAttribArray( fs_quad_position_attrib );
    glEnableVertexAttribArray( fs_quad_texture_attrib );
    
    glBindBuffer(GL_ARRAY_BUFFER, fs_quad_model->positions_buffer);
    glVertexAttribPointer(fs_quad_position_attrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
    
    glBindBuffer(GL_ARRAY_BUFFER, fs_quad_model->texcoords_buffer);
    
    const float2 texcoords_array[] =
    {
        { resolution_factor, resolution_factor },
        { 0.0f,              resolution_factor },
        { 0.0f,              0.0f              },
        { resolution_factor, 0.0f              },
    };
    
    glBufferData(GL_ARRAY_BUFFER, sizeof(float3) * fs_quad_model->num_vertices, texcoords_array, GL_STATIC_DRAW);
    glVertexAttribPointer(fs_quad_texture_attrib, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, fs_quad_model->index_buffer );
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, render_target->texture);
    glUniform1i(fs_quad_texture_location, 0);
    glDrawElements(GL_TRIANGLES, fs_quad_model->num_indices, GL_UNSIGNED_INT, 0);
    

    The Resolution Factor

    Most of the code shown above is OpenGL setup code. The important part is the use of the variable resolution_factor. Its value determines the fraction of the offscreen texture's width and height that is used for drawing and sampling. Selecting the region of the offscreen texture to draw into is simple and is done with a call to glViewport.

    // 1. SAVE OUT THE DEFAULT FRAME BUFFER
    
    // 2. RENDER TO OFFSCREEN RENDER TARGET
    glBindFramebuffer(GL_FRAMEBUFFER, render_target->frame_buffer);
    glViewport(0, 0, render_target->width * resolution_factor, render_target->height * resolution_factor);
    
    /// DRAW THE SCENE ///
    
    // 3. RESTORE DEFAULT FRAME BUFFER
    
    // 4. RENDER FULLSCREEN QUAD
    glViewport(0, 0, screen_width, screen_height);
    

    With the framebuffer bound, the call to glViewport sets the region to draw into from the scaled width and height. The viewport is then reset to the native resolution to draw the full-screen quad and the user interface. To sample only the updated portion of the offscreen texture, the texture coordinates of the full-screen quad's vertices are set accordingly. The following code does this:

    glBindBuffer(GL_ARRAY_BUFFER, fs_quad_model->texcoords_buffer);
    
    const float2 texcoords_array[] =
    {
        { resolution_factor, resolution_factor },
        { 0.0f,              resolution_factor },
        { 0.0f,              0.0f              },
        { resolution_factor, 0.0f              },
    };
    
    glBufferData(GL_ARRAY_BUFFER, sizeof(float3) * fs_quad_model->num_vertices, texcoords_array, GL_STATIC_DRAW);
    

    Benefits of Dynamic Resolution

    With this setup in place, the scene is rendered into the offscreen texture and then drawn to the screen on a full-screen quad. The resolution the scene is actually rendered at is no longer tied to the native resolution, and the number of pixels processed for the scene can be changed dynamically. Depending on the type and style of the game, the resolution can often be reduced substantially without degrading the image. Below are samples at several different resolutions:

    In this case, the resolution can be lowered to somewhere between 75% and 50% before image quality noticeably deteriorates; the main artifact that shows up is aliasing along edges. For this scene, 75% of the native resolution works well, but depending on your game you could even use 25% to achieve an interesting art style.

    Dynamic resolution rendering clearly provides a straightforward way to reduce the number of pixels processed. It also allows more work to be done on each pixel: because you are no longer rendering at full resolution, there are fewer fragment shader invocations, so each one can afford to do more. The fragment shader in the sample code was kept deliberately simple for clarity and readability. Balancing performance against image quality is one of the more challenging tasks a developer faces.
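    The code leaves the choice of resolution_factor to the application. One simple, illustrative policy (ours, with made-up tuning values) is a frame-time feedback loop that trades pixels for speed each frame:

    /* Illustrative: adjust resolution_factor once per frame from the measured frame time. */
    const float target_frame_time = 1.0f / 30.0f;      /* seconds; tuning value */

    if (last_frame_time > target_frame_time * 1.10f)
        resolution_factor -= 0.05f;                    /* too slow: render fewer pixels */
    else if (last_frame_time < target_frame_time * 0.90f)
        resolution_factor += 0.05f;                    /* headroom: restore quality */

    if (resolution_factor < 0.25f) resolution_factor = 0.25f;
    if (resolution_factor > 1.0f)  resolution_factor = 1.0f;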

    Our Implementation



    Figure 2. Implementation detail

    The downloadable implementation includes only an Android* project, but it is organized as shown in Figure 2 so that it can easily be extended to other mobile operating systems. The core of the project is written in C, targets OpenGL ES 2.0, and requires the Android NDK. C is well suited to cross-platform development. The system abstraction layer covers file I/O and other OS-specific functionality.

    Conclusion

    Dynamic resolution rendering is a good way to address many of the problems related to mobile screen resolutions. It gives developers and users more control over the ratio between performance and image quality. When tuning that ratio, also account for the cost of dynamic resolution rendering itself: creating the render target and switching render targets every frame adds to the per-frame processing time. Understanding and measuring that cost will help you decide whether the technique is a good fit for your game.

    References

    [Binks 2011] Binks, Doug. "Dynamic Resolution Rendering Article." http://software.intel.com/en-us/articles/dynamic-resolution-rendering-article

    Copyright © 2013 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

    OpenGL is a registered trademark, and the OpenGL ES logo is a trademark, of Silicon Graphics Inc. used by permission by Khronos.

  • Android Code Sample
  • Developers
  • Android*
  • OpenGL*
  • Game Development
  • Graphics
  • Phone
  • URL
  • Intel® Graphics Performance Analyzers for Android* OS


    Introduction

    The Intel® Graphics Performance Analyzers (Intel® GPA) suite is a set of powerful graphics and gaming analysis tools designed to work the way game developers work. By quickly providing actionable data, the suite helps developers find performance opportunities from the system level down to the individual draw call, saving valuable optimization time.

    Intel® GPA currently supports phones and tablets based on Intel® Atom™ processors running the Google* Android* OS. This version of the tool suite lets you optimize OpenGL* ES workloads from the development system of your choice (Windows*, OS X*, or Ubuntu* OS). With it, Android* developers can:

    • Get a real-time view of more than two dozen critical system metrics covering the CPU, GPU, and the OpenGL* ES API
    • Conduct a number of graphics pipeline experiments to isolate graphics bottlenecks
    • Run the Intel GPA Frame Analyzer for detailed frame analysis and optimization when using tablets based on Intel Atom processors
    • Run the Intel GPA Platform Analyzer for detailed platform analysis when using Android* devices based on Intel Atom processors with PowerVR* graphics

    To download a free copy of Intel GPA, visit the Intel GPA home page and click the Download button for the appropriate product version. For developing games or applications for the Android* OS platform, choose the Intel GPA version that matches your development system.

    Next Steps

    For details on getting started with Intel GPA on the Android* OS, see the product's online help. The Intel GPA home page also links to additional product information, including information on analyzing DirectX* games on Windows* platforms.

    If you would like to be notified of Intel GPA product updates, click this link.

    As always, we welcome your suggestions, so please post your comments on the Intel GPA support forum and let us know what we can do to improve your experience with these tools.

    *Other names and brands may be claimed as the property of others.

  • vcsource_type_techarticle
  • vcsource_product_gpa
  • vcsource_domain_gamedev
  • vcsource_index
  • Developers
  • Android*
  • Intel® Graphics Performance Analyzers
  • Game Development
  • Phone
  • URL
  • Porting OpenGL* Games to Android* on Intel® Atom™ Processors (Part 2)


    This is the second of two articles on the obstacles to porting OpenGL games to the Google Android platform. You should understand these obstacles, which include differences in OpenGL extensions, floating-point support, texture compression formats, and the GLU library, before starting a game porting project. This part also describes how to set up an Android development system for OpenGL ES on Intel Atom processors and how to get the best performance out of the Android Virtual Device emulation tool.

    Part 1 of this article described how to use OpenGL ES on the Android platform through either the Software Development Kit (SDK) or the Android Native Development Kit (NDK) and how to decide which approach to use. It also covered the OpenGL ES example applications in the various SDK and NDK releases and the Java* Native Interface, which lets you combine Java and C/C++ components, and it discussed how to decide between OpenGL ES versions 1.1 and 2.0.

    Obstacles to Porting Desktop OpenGL Games

    The OpenGL ES 1.1 application programming interface (API) is a subset of the OpenGL 1.x API that has been used on desktop Linux* and Windows* systems for decades. Likewise, the OpenGL ES 2.0 API is a subset of the desktop OpenGL 2.0 API. Every game porting project should therefore start by assessing which OpenGL feature set the game code uses, to establish how old the code is and to find the portions that must be rewritten because the OpenGL version used differs from the OpenGL ES version that will be used on Android.

    The main differences between OpenGL 2.0 and OpenGL ES 2.0 fall into three categories: geometry specification, geometry transformation, and changes in the OpenGL Shading Language (GLSL). OpenGL 2.0 provides four ways to specify geometry: immediate mode, display lists, vertex arrays, and vertex buffer objects. OpenGL ES 2.0 does not support immediate mode (glBegin/glEnd blocks) or display lists, so you must use vertex arrays or vertex buffer objects to specify geometry.

    OpenGL 2.0 provides functions to load the model-view, projection, and texture matrices, plus convenience functions such as glTranslate, glRotate, and glScale that combine with them. OpenGL ES 2.0 has none of these, nor any projection-related functions, because the fixed-function pipeline has been replaced by programmable shaders. With OpenGL ES 2.0, the transformation and projection calculations must be performed in the vertex shader.

    In addition, the shading language for OpenGL ES 2.0 (GLSL ES) is a subset of the desktop OpenGL 2.0 shading language (GLSL). The main difference is that GLSL provides built-in variables that give vertex shaders access to fixed-function pipeline state (such as gl_ModelViewMatrix). Because OpenGL ES 2.0 does not support the fixed-function pipeline, these variables are not available in GLSL ES.

    The following OpenGL 2.0 features are not available in OpenGL ES 2.0:

    • Immediate-mode geometry specification
    • Display-list geometry specification
    • All fixed-function pipeline processing
    • Fixed transformation and projection
    • Matrix stacks and matrix transformations
    • User clip planes
    • Line and polygon smoothing and stippling
    • Pixel rectangles and bitmaps
    • 1D textures and some texture wrap modes
    • Occlusion queries
    • Fog

    Note: For more information on the differences between OpenGL 2.0 and OpenGL ES 2.0, see http://software.intel.com/en-us/articles/targeting-3d-applications-for-mobile-devices-powered-by-opengl-and-opengl-es
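    To illustrate the first two categories, the sketch below shows how immediate-mode desktop geometry might be re-expressed for OpenGL ES 2.0 with a client-side vertex array and the transformation passed to the vertex shader as a uniform. The program object, attribute and uniform names, and the mvp matrix are illustrative assumptions of ours, not code from any particular game.

    /* Desktop OpenGL 1.x/2.0 (not available in OpenGL ES 2.0):
     *     glBegin(GL_TRIANGLES); glVertex3f(...); ... glEnd();
     *
     * OpenGL ES 2.0 equivalent, assuming 'prog' is a linked program whose vertex
     * shader declares "attribute vec4 a_position;" and "uniform mat4 u_mvp;",
     * and 'mvp' is a GLfloat[16] model-view-projection matrix computed by the app. */
    static const GLfloat tri[] = { -0.5f, -0.5f, 0.0f,
                                    0.5f, -0.5f, 0.0f,
                                    0.0f,  0.5f, 0.0f };

    GLint posLoc = glGetAttribLocation(prog, "a_position");
    GLint mvpLoc = glGetUniformLocation(prog, "u_mvp");

    glUseProgram(prog);
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);   /* replaces the fixed-function matrix stack */
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, tri);
    glDrawArrays(GL_TRIANGLES, 0, 3);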

    Porting from Other Embedded Systems

    In general, porting an OpenGL ES game to Android from another embedded system is easier than porting from a desktop system, because most embedded systems use the same OpenGL ES 1.1 or 2.0 specifications as Android. When porting OpenGL ES code from another embedded system, the biggest differences usually lie in the supported OpenGL ES extensions, especially where compressed texture formats are concerned. Another point to watch is the difference in floating-point support between embedded systems.

    Floating-Point Support

    Early implementations of OpenGL ES 1.0 and 1.1 came in two profiles, named Common and Common-Lite. The Common profile targets processors with floating-point hardware; the Common-Lite profile targets processors without it, which must therefore use fixed-point arithmetic. Because Android requires a processor with floating-point support, such as the Intel Atom processor, there is no need for fixed point, and the system images Google provides for Android include only the Common (floating-point) driver for OpenGL ES 1.1. The Khronos standard for OpenGL ES 2.0 is floating point only. This matters if you are porting an OpenGL ES 1.0 or 1.1 game from another embedded system that used the Common-Lite profile, because you must convert all fixed-point arithmetic in the code to floating point (Android supports only the Common profile of OpenGL ES). On embedded Linux systems, the Common and Common-Lite drivers are typically named along the lines of libGLESv1_CM.so and libGLESv1_CL.so, respectively.

    If you are porting OpenGL code from a desktop application, it probably already uses floating point. Watch, however, for any use of double-precision floating point in the code. Android supports double precision, but OpenGL ES supports only single precision, so all double-precision values must be converted to single precision before being passed to OpenGL ES 1.1 or 2.0.

    OpenGL Extensions

    One of the biggest obstacles to porting OpenGL code across platforms comes from the application's use of OpenGL extensions and the Embedded-System Graphics Library (EGL). Early in the porting process you should assess which extensions the legacy code uses and compare them with the extensions actually available on the target Android devices. Extensions are how GPU vendors expose specific features of their GPUs beyond what the standard OpenGL specification defines. Any extension the legacy code uses that is not available on the target platform will require new code, either to remove the need for the extension or to use a different one.

    The supported extensions vary widely between OpenGL versions, and because of differences in GPU architecture they also vary between Android platforms. Intel Atom processors use the PowerVR* GPU, the most widely used GPU architecture in today's mobile devices. An Android application should never assume that any OpenGL ES extension is available on any device. A well-behaved application queries OpenGL ES at run time for the list of available extensions. You can obtain the list of available extensions from the OpenGL ES and EGL drivers with the following calls:

    • glGetString(GL_EXTENSIONS); 
    • eglQueryString(eglGetCurrentDisplay(), EGL_EXTENSIONS);

    All function and enum names needed to call an extension are defined in the glext.h, gl2ext.h, and eglext.h header files, so once your application has confirmed at run time that an extension is available, it can call it directly.

    A useful application on the Google Play site, OpenGL Extensions Viewer (GLview), performs this query and displays the results for whatever Android device it runs on. You can use a tool like GLview to test a range of Android devices yourself and confirm which extensions are available. Khronos also maintains the OpenGL ES API registry of all known OpenGL ES extensions, which is an important reference.
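    A small helper for that run-time check might look like the sketch below; it assumes a current OpenGL ES context and uses a simple substring match (a production version should match whole extension tokens).

    #include <string.h>
    #include <GLES2/gl2.h>

    /* Returns nonzero if 'name' appears in the driver's extension string. */
    static int has_gl_extension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return (exts != NULL) && (strstr(exts, name) != NULL);
    }

    /* Example use: */
    if (has_gl_extension("GL_IMG_texture_compression_pvrtc")) {
        /* safe to upload PVRTC-compressed textures */
    }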

    Texture Compression Formats

    The most important OpenGL ES extensions are the ones that provide support for compressed textures. This is an important technique most 3D games use to reduce memory requirements and improve performance, but the formats usable with OpenGL ES are defined only through extensions and therefore differ on every platform. Unfortunately, the only format supported across all Android devices is Ericsson Texture Compression (ETC1) (except on first-generation devices, which do not support OpenGL ES 2.0). ETC1 provides only 8 bits per pixel of precision and does not support alpha. Because most games use compressed textures with alpha, this is generally a serious obstacle to porting. Several proprietary formats that do support alpha are available on some Android platforms, but using them limits your game to Android devices with that particular GPU architecture. Table 1 summarizes the GPU architectures used in Android devices and their proprietary texture compression formats.

    Table 1. Proprietary compressed texture formats with alpha support

    GPU architecture       Texture format
    PowerVR                PVRTC
    ATI/AMD                ATITC/3DC
    NVIDIA Tegra* 2/3      DXT/S3TC

    The PowerVR GPU in Intel Atom processors supports the PVRTC format in addition to ETC1. PVRTC supports alpha with 2 or 4 bits per pixel of precision, which makes textures dramatically smaller than ETC1. PVRTC is the most widely supported format after ETC1; it is also available on all generations of Apple iPhone*, iPod touch*, and iPad* devices, so on Intel Atom processors it provides Android compatibility beyond the base OpenGL ES standard, which is important when a game is being ported to both Android and Apple iOS* platforms. It still does not cover Android devices that use other GPU architectures, however. If your game uses any proprietary texture formats, it must query the OpenGL ES extensions at run time to confirm which formats are actually available and use only textures compressed in those formats. That forces a difficult design decision. You can choose to:

    • Not support Android devices with other GPU architectures
    • Provide all textures compressed in each of the three proprietary formats
    • Not use compression for textures that need alpha
    • Split the color and alpha components into a pair of ETC1 files and recombine them in a fragment shader

    After deciding which proprietary formats your game requires, be sure to declare them in the manifest, as described below. For more information, see Texture Compression Support.
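    As an illustration of that run-time decision, the sketch below picks a compressed format before uploading a texture. It assumes the texture data has been prepared for whichever format is chosen; the enum values are the standard ones from the corresponding extension headers.

    #include <string.h>
    #include <GLES2/gl2.h>

    #ifndef GL_ETC1_RGB8_OES
    #define GL_ETC1_RGB8_OES                    0x8D64
    #endif
    #ifndef GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG
    #define GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG 0x8C02
    #endif

    GLenum fmt;
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);

    if (exts && strstr(exts, "GL_IMG_texture_compression_pvrtc"))
        fmt = GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG;   /* alpha supported */
    else
        fmt = GL_ETC1_RGB8_OES;                      /* no alpha; handle alpha separately */

    /* width, height, dataSize, and data hold the texture prepared for 'fmt' */
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, fmt, width, height, 0, dataSize, data);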

    The GLU Library

    Desktop versions of OpenGL generally ship with a library of convenience functions called the OpenGL Utility Library (GLU). GLU contains functions that are not strictly part of OpenGL and are not required to use OpenGL, but that are helpful when writing 3D applications for it. The standard GLU library includes functions to build model-view and projection matrices, perform general matrix math, compute quadric surfaces, tessellate polygons, generate mipmaps, and report error messages. Most desktop OpenGL games use some GLU functions, which can be troublesome because Android provides only a minimal GLU implementation, and only in the SDK. Its classes are listed in Table 2.

    Table 2. Android classes that provide GLU functionality

    Class                       Functionality
    android.opengl.GLU          Creates projection matrices for OpenGL ES
    android.graphics.Matrix     Creates model-view and general-purpose matrices

    OpenGL ES 1.1 and 2.0 provide support for generating mipmaps, but there is no support for computing quadric surfaces or tessellating polygons. If you need that functionality, other open source GLU implementations exist that are more complete and better suited for use with the Android NDK, such as GLU ES.

    Manifest Declarations for OpenGL ES

    Applications that require OpenGL ES should declare that fact in their manifest file. This prevents your application from being installed on devices that do not support the required OpenGL ES version. The following examples show the correct syntax for the required manifest parameters in the AndroidManifest.xml file.

    OpenGL ES 1.1:

    • <uses-feature android:glEsVersion="0x00010001" android:required="true" />
    • <uses-sdk android:minSdkVersion="4"/>

    OpenGL ES 2.0:

    • <uses-feature android:glEsVersion="0x00020000" android:required="true" />
    • <uses-sdk android:minSdkVersion="8"/>

    A major change in Android 4.0 (API level 14) is that OpenGL ES hardware acceleration is enabled by default for all applications that declare a minSdkVersion parameter of 14 or higher. Applications targeting lower API levels can enable acceleration by adding android:hardwareAccelerated="true" to the <application> tag. If your application uses new features of Android 4.0 that require OpenGL ES hardware acceleration, such as the TextureView class, you must enable it in the manifest or those features will not work.

    If your application uses the NativeActivity class, its manifest must declare that class and specify the name of the shared object library containing the native activity. The minSdkVersion parameter must also be 9 or higher. See the native-activity sample application in the NDK.

    If your application uses compressed textures, be sure to declare them as shown below. These declarations do not prevent your application from being installed on devices that lack the formats, so you should still query the device at run time for the extensions of the proprietary formats you need. External services such as Google Play can use these manifest parameters to filter applications away from devices that cannot support the required texture formats.

    Manifest parameters required for compressed texture formats:

    • <supports-gl-texture android:name="GL_OES_compressed_ETC1_RGB8_texture" />
    • <supports-gl-texture android:name="GL_OES_compressed_paletted_texture" />
    • <supports-gl-texture android:name="GL_IMG_texture_compression_pvrtc" />

    Setting Up an Android Development System for Intel Atom Processors

    When you set up your Android development system, be sure to choose an Android release that includes built-in support for Intel Atom processors. That lets you target Intel Atom devices and use Android Virtual Devices (AVDs) with the Intel Atom x86 system image. Table 3 summarizes the major Android releases to date and indicates which include a built-in system image for Intel Atom processors.

    Table 3. Android releases with OpenGL ES and Intel® Atom™ processor support

    Version          Name                   API    OpenGL ES support          Intel Atom support
    Android 1.5      Cupcake                3      OpenGL ES 1.0
    Android 1.6      Donut                  4      OpenGL ES 1.0, 1.1
    Android 2.0      Éclair                 5      OpenGL ES 1.0, 1.1
    Android 2.1      Éclair                 7      OpenGL ES 1.0, 1.1
    Android 2.2      Froyo                  8      OpenGL ES 1.0, 1.1, 2.0
    Android 2.3.3    Gingerbread            10     OpenGL ES 1.0, 1.1, 2.0    Yes
    Android 3.0      Honeycomb              11     OpenGL ES 1.0, 1.1, 2.0
    Android 3.1      Honeycomb              12     OpenGL ES 1.0, 1.1, 2.0
    Android 3.2      Honeycomb              13     OpenGL ES 1.0, 1.1, 2.0
    Android 4.0      Ice Cream Sandwich     14     OpenGL ES 1.0, 1.1, 2.0
    Android 4.0.3    Ice Cream Sandwich     15     OpenGL ES 1.0, 1.1, 2.0    Yes
    Android 4.1      Jelly Bean             16     OpenGL ES 1.0, 1.1, 2.0    Yes

    Android versions 2.3.3 (Gingerbread), 4.0.3 (Ice Cream Sandwich), and 4.1 (Jelly Bean) include full support for Intel Atom processors, including drivers for OpenGL ES 1.0, 1.1, and 2.0. Choose one of those versions in the Android SDK Manager and make sure it lists the Intel x86 Atom system image. Also download and install the Intel® Hardware Accelerated Execution Manager (HAXM), found in the SDK Manager under Extras. If you have not yet installed the Android SDK Manager, you can download it from http://developer.android.com/sdk/index.html.

    The SDK Manager downloads the HAXM installer into the Extras folder, but to complete the installation you must run the HAXM installer manually. You must also enable the virtualization technology feature in your development system's ROM BIOS setup menu. The HAXM installer is placed in …\android-sdk\extras\intel\Hardware_Accelerated_Execution_Manager. When HAXM is installed correctly, the following message appears when an x86 AVD starts: "HAX is working and emulator runs in fast virt mode."

    Using x86 System Images and HAXM with AVD Emulation

    Every embedded developer knows how much a virtual device manager can speed up new application development. But running an AVD with an ARM system image is very slow, because the AVD must emulate every ARM instruction on the Windows or Linux development system. Just booting Android on a typical ARM AVD can take five minutes or more. Intel solved this problem with HAXM, which uses the virtualization technology features built into recent Intel desktop processors to run Android x86 system images directly. With HAXM installed and an AVD based on the Intel Atom x86 system image, application development gets a dramatic speedup, and you do not even need Intel Atom hardware.

    Google added OpenGL ES 2.0 support to AVD emulation in April 2012 (starting with SDK Tools revision 17). It works by translating OpenGL ES 2.0 calls into the OpenGL 2.0 API available on the host operating system, which makes 3D graphics run much faster. However, you must specifically enable this feature when creating the AVD; otherwise, calls to OpenGL ES 2.0 will fail. Naturally, your Windows or Linux host system must have drivers installed for OpenGL 2.0 (or later), which usually requires a discrete graphics card. OpenGL ES 2.0 emulation works on both ARM and x86 system images, but for the best performance use it with an x86 system image and HAXM enabled. The difference in performance is dramatic.

    To enable OpenGL ES 2.0 emulation, select an AVD in the AVD Manager and click Edit. In the Hardware Properties window click New, scroll through the property list to GPU emulation, and click OK. Finally, change the property value from no to yes, then click Edit AVD and OK to save the change. A message should appear that includes hw.gpu.enabled=yes.

    Before SDK Tools revision 17, AVD emulation supported only OpenGL ES 1.0 and 1.1. Strictly speaking, version 1.1 does not require GPU emulation to be enabled, because it can be emulated without help from the host system, although it runs slower that way. The SDK tools cannot emulate version 2.0 without the host's help, however, so if it is not available the AVD will simply shut down your application when it tries to initialize OpenGL ES 2.0.

    Use the HelloEffects sample application for a first test of whether OpenGL ES 2.0 emulation is working. The GLES20TriangleRenderer sample actually falls back to version 1.1 when it runs on an AVD, so it is not a valid test. All of the OpenGL ES 2.0 samples provided in the Android SDK and NDK are related to that original sample and should not be used to test OpenGL ES 2.0 emulation.

    Intel® Graphics Performance Analyzers

    Another important tool suite available to developers of OpenGL ES applications targeting Android on Intel Atom processors is the Intel® Graphics Performance Analyzers (Intel® GPA). The suite provides a real-time view of dozens of critical system metrics covering the CPU, GPU, and OpenGL ES. Intel GPA runs on a Windows or Ubuntu Linux development system and communicates over the Android debug interface with a driver component running on the Android target device. By running a number of experiments you can quickly see problems in the graphics pipeline and find the best opportunities for optimizing your code.

    For more information, and to download the latest version of Intel GPA for Android development (2012 R4), visit http://software.intel.com/en-us/vcsource/tools/intel-gpa?cid=sem121p7972

    Conclusion

    Android's support for OpenGL ES and C/C++ lowers the barriers to porting games and other applications that make heavy use of 3D graphics to devices built on the Android platform. Some obstacles remain, however, and you should understand them before starting a game porting project. The biggest are the differences in OpenGL extensions and in the texture compression formats supported on Android. On the positive side, Google and Intel have made great progress in recent years in improving the tools for OpenGL ES 2.0 development, and the wealth of games, game engines, and legacy game software available under the OpenGL standard today represents a great opportunity for the software developers who take it on.

    About the Author

    Clay D. Montgomery is a leading developer of OpenGL drivers and applications for embedded systems. He has worked on the design of cross-platform graphics accelerator hardware, graphics drivers, APIs, and OpenGL applications at STB Systems, VLSI Technology, Philips Semiconductors, Nokia, Texas Instruments, and AMX, and as an independent consultant. He helped develop some of the first OpenGL ES, OpenVG*, and SVG drivers and applications for the Freescale i.MX and TI OMAP* platforms and for Vivante, AMD, and PowerVR. He develops and teaches workshops on OpenGL ES development on embedded Linux and has represented several companies in the Khronos Group.

    For More Information

  • Developers
  • Android*
  • Intel® Graphics Performance Analyzers
  • OpenGL*
  • Game Development
  • Intel® Atom™ Processors
  • Phone
  • URL
  • Porting OpenGL* Games to Android* on Intel® Atom™ Processors (Part 1)


    There is a great opportunity in porting games and other applications that make heavy use of 3D graphics from the OpenGL standard to Google Android devices, including devices built on the Intel® Atom™ microarchitecture, because OpenGL-based games, game engines, and other legacy software are readily available, OpenGL is designed to be portable, and Android provides support for OpenGL ES and C/C++. Many OpenGL-based games and engines are even available as open source software, such as the Quake series from Id Software. This two-part article describes how to get started on such a project by detailing the obstacles to porting the rendering components of applications built on earlier versions of OpenGL to Android on Intel Atom processors. The applications may be games, game engines, or any software that uses OpenGL to build 3D scenes or graphical user interfaces (GUIs). This includes porting OpenGL code from desktop operating systems such as Windows* and Linux* as well as from applications built on embedded versions of OpenGL ES, with or without a windowing system.

    This first part describes how to use OpenGL ES on the Android platform through either the Software Development Kit (SDK) or the Android Native Development Kit (NDK) and how to decide which approach to use. It also covers the OpenGL ES example applications in the various SDK and NDK releases and the Java* Native Interface (JNI), which lets you combine Java and C/C++ components. Finally, it discusses how to decide whether to use OpenGL ES version 1.1 or 2.0.

    Part 2 discusses the obstacles to porting OpenGL games that you should understand before starting such a project, including differences in OpenGL extensions, floating-point support, texture compression formats, and the GLU library. It also describes how to set up an Android development system for OpenGL ES on Intel Atom processors and how to get the best performance out of the Android Virtual Device emulation tool.

    Graphics on Android

    There are four different ways to render graphics on the Android platform, each with its own strengths and weaknesses (see Table 1). This article does not cover all four; only two of them are appropriate for porting OpenGL code from other platforms: the SDK wrapper classes for OpenGL ES, and the NDK, which supports OpenGL ES development in C/C++. Of the other two, the SDK Canvas application programming interface (API) is a capable 2D API that you can combine with OpenGL ES, but it is 2D only and requires new code.

    Android's Renderscript API originally supported OpenGL ES, but that support was deprecated in API level 16 (Jelly Bean) and should not be used in new projects. Today Renderscript is best suited to applications that need to speed up compute-intensive algorithms without allocating large amounts of memory or transferring large amounts of data, such as the calculations of a game physics engine.

    Table 1. The four ways to render graphics on Android

    Method                                 Limitations
    SDK Canvas API                         Java only; 2D graphics only
    SDK wrapper classes for OpenGL ES      OpenGL ES 1.1 and 2.0 callable from Java (with JNI overhead)
    NDK OpenGL ES                          OpenGL ES 1.1 and 2.0 (native C/C++ called from Java)
    Renderscript for OpenGL ES             OpenGL ES support deprecated in API level 16

    Porting an OpenGL application to the early versions of Android was difficult, because most legacy OpenGL code is written in C or C++ and Android had no C/C++ support until the NDK was released with Android 1.5 (Cupcake). OpenGL ES 1.0 and 1.1 were supported from the start, but performance was inconsistent because hardware acceleration was optional. Android has made great progress since then. OpenGL ES 2.0 support was added to the SDK in Android 2.2 (Froyo) and to the NDK in revision 3, and support for OpenGL ES extensions was added in NDK revision 7. Accelerated OpenGL ES 1.1 and 2.0 are now required on all new Android devices, which matters more and more as screen sizes continue to grow. Today, Android provides consistent, reliable performance for 3D-intensive applications built on OpenGL ES 1.1 or 2.0 in Java or C/C++, and developers can choose among several approaches to make the porting process easier.

    Using the Android Framework SDK with the OpenGL ES Wrapper Classes

    The Android SDK framework provides a set of wrapper classes for the three versions of OpenGL ES that Android supports (1.0, 1.1, and 2.0). These classes let Java code call the OpenGL ES drivers on the Android system easily, even though the drivers themselves are implemented natively. If you are creating a new OpenGL ES game for Android from scratch, or are willing to convert your legacy C/C++ code to Java, this is probably the simplest approach. Although Java was designed for portability, porting Java applications can still be difficult, because Android does not support the full set of existing Java Platform, Standard Edition (Java SE) or Java Platform, Micro Edition (Java ME) classes, libraries, and APIs. Android's Canvas API provides a 2D API, but it is available only on Android and is not compatible with legacy code.

    The Android SDK provides several other classes that make working with OpenGL ES easier, such as GLSurfaceView and TextureView. GLSurfaceView is similar to the SurfaceView class used with the Canvas API, but it has additional features specifically for OpenGL ES. It handles the required Embedded-System Graphics Library (EGL) initialization and allocates a rendering surface that Android displays at a fixed position on the screen. It also has useful features for tracing and debugging OpenGL ES calls. You can create a new OpenGL ES application quickly by implementing the three methods of the GLSurfaceView.Renderer() interface, listed in Table 2.

    Table 2. Essential methods of GLSurfaceView.Renderer

    Method                  Description
    onSurfaceCreated()      Called once when the application starts up and initializes
    onSurfaceChanged()      Called whenever the size or orientation of the GLSurfaceView changes
    onDrawFrame()           Called repeatedly to render each frame of the graphics scene

    Starting with Android 4.0, you can use the TextureView class instead of GLSurfaceView to provide a rendering surface for OpenGL ES with additional capabilities, at the cost of some extra code. TextureView surfaces behave like regular Views and can be used to render to an off-screen surface. When TextureViews are composited onto the screen they can be moved, transformed, animated, or blended. You can also use TextureView to combine OpenGL ES rendering with the Canvas API.

    The Bitmap and GLUtils classes make it easier to create textures for OpenGL ES with the Android Canvas API or to load textures from PNG, JPG, or GIF files. Bitmaps can be used to allocate a rendering surface for Android's Canvas API, and the GLUtils class converts images from Bitmaps into OpenGL ES textures. Together they let you render 2D images with the Canvas API and then use them as textures in OpenGL ES. This is especially useful for creating graphical elements that OpenGL ES does not provide, such as GUI widgets and text fonts. Of course, new Java code is required to take advantage of these features.

    The Bitmap class was designed mainly for use with the Canvas API, and it has some serious limitations when used to load textures for OpenGL ES. The Canvas API follows the Porter-Duff specification for alpha blending, and Bitmaps optimize images with per-pixel alpha by storing them in a premultiplied format (A, R*A, G*A, B*A). That is ideal for Porter-Duff but not for OpenGL, which requires a non-premultiplied (ARGB) format. This means the Bitmap class can only be used with textures that are completely opaque (or have no per-pixel alpha). 3D games typically need textures with per-pixel alpha, in which case you must avoid the Bitmap class and load textures from byte arrays or through the NDK.

    Another problem is that Bitmaps can only load images from PNG, JPG, or GIF files, whereas OpenGL games generally use compressed texture formats that are decoded by the GPU and are usually specific to the GPU architecture, such as ETC1 and PVRTC. Bitmap and GLUtils do not support any of the proprietary compressed texture formats or mipmapping. Because such textures are used heavily by most 3D games, this is a serious obstacle to porting legacy OpenGL games to Android with the SDK. Until Google addresses these issues, the best workaround is to avoid the Bitmap and GLUtils classes for loading textures. Texture formats are discussed further in Part 2 of this article, under "Texture Compression Formats."

    The Android ApiDemos include a sample application called StaticTriangleRenderer that demonstrates how to use the GLES10 wrapper for OpenGL ES 1.0, together with the GLSurfaceView, Bitmap, and GLUtils classes, to load an opaque texture from a PNG resource. A similar version called GLES20TriangleRenderer uses the GLES20 wrapper class for OpenGL ES 2.0. If you are working with the Android framework SDK and the wrapper classes, these sample applications are a good foundation for developing an OpenGL ES game. Do not use the original version, named TriangleRenderer, because it uses the wrapper for the older Java binding for OpenGL ES, named javax.microedition.khronos.opengles. Google created the newer bindings to provide a static interface for OpenGL ES specific to Android. These static bindings offer better performance, implement more OpenGL ES features, and provide a programming model closer to using OpenGL ES with C/C++, which helps with code reuse.

    The Android framework SDK supports OpenGL ES through several Java bindings provided by Google and Khronos, as shown in Table 3.

    Table 3. Java bindings for OpenGL ES and sample applications

    Java binding for OpenGL ES               Description                         Sample applications
    javax.microedition.khronos.egl           Khronos standard definition
    javax.microedition.khronos.opengles      Khronos standard definition         TriangleRenderer, Kube, CubeMap
    android.opengl                           Android-specific static interface

    The Android-specific static bindings provide better performance and should be used instead of the Khronos bindings whenever possible. The static bindings provide wrapper classes for every version of OpenGL ES available to application developers on Android. Table 4 summarizes them.

    Table 4. Android wrapper classes for OpenGL ES and sample applications

    API version       Java class                   Sample applications
    OpenGL ES 1.0     android.opengl.GLES10        StaticTriangleRenderer, CompressedTexture
    OpenGL ES 1.0     android.opengl.GLES10Ext
    OpenGL ES 1.1     android.opengl.GLES11
    OpenGL ES 1.1     android.opengl.GLES11Ext
    OpenGL ES 2.0     android.opengl.GLES20        GLES20TriangleRenderer, BasicGLSurfaceView, HelloEffects

    With these wrapper classes, most of the OpenGL ES calls in legacy C/C++ code can be converted to Java simply by prefixing the OpenGL ES function and symbol names with the wrapper class for the appropriate API version. See the examples in Table 5.

    Table 5. Examples of converting OpenGL ES calls from C to Java

    C                                         Java
    glDrawArrays(GL_TRIANGLES, 0, count)      GLES11.glDrawArrays(GLES11.GL_TRIANGLES, 0, count)
    glDrawArrays(GL_TRIANGLES, 0, count)      GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, count)

    Porting an OpenGL game to Android with these wrapper classes has three major drawbacks: the overhead of the Java virtual machine, the overhead of JNI, and the work of converting legacy C/C++ code to Java. Java is an interpreted language, and all Java code on Android runs in the Dalvik virtual machine, so it runs more slowly than compiled C/C++ code. Because the OpenGL ES drivers always run natively, every OpenGL ES call made through these wrappers incurs JNI overhead, which limits the graphics rendering performance the game can achieve. The more OpenGL ES calls your application makes, the greater the impact of the JNI overhead. Fortunately, OpenGL ES is designed to minimize the number of calls typically needed in a performance-critical rendering loop. If performance matters, you can always move the performance-critical code to C/C++ with the NDK. But, of course, if your code starts out in C/C++, it is better to use the NDK from the beginning.

    Whether to convert your C/C++ code to Java depends on the specifics of your project. If the amount of C/C++ code is relatively small and easy to understand, and squeezing out maximum performance is not a goal, converting it to Java is reasonable. If the C/C++ code base is large or difficult to understand, or performance is critical, consider using the NDK.

    Using the Android NDK

    Google added the NDK in June 2009 to let applications use natively compiled C/C++ code, which offers higher performance than Java. Because most legacy OpenGL code is written in C or C++, the NDK provides an easier porting path, especially when the amount of C/C++ code is so large that converting it all to Java is impractical. This is one of the main reasons Google decided to release the NDK publicly: it makes porting OpenGL games to Android much easier. Because of these advantages, the NDK has become the primary way to implement applications on Android that need fast graphics.

    With the NDK, you compile your C/C++ code into Linux shared object libraries that are linked into your Android application. The libraries are built with GNU tools included in the NDK distribution from Google, which you can run on Windows, Mac OS* X, or Linux development systems from the Eclipse* integrated development environment or a command-line interface. The toolchain supports three processor architectures: ARM, Intel Atom (x86), and MIPS. The full power of C/C++ is available, but most Linux APIs are not. In fact, the only directly supported APIs are OpenGL ES, OpenMAX* AL, OpenSL ES*, zlib, and Linux file I/O, which Google calls the stable APIs. However, documentation is provided on how to port other Linux libraries into your NDK project as needed.

    The NDK gives you the flexibility to partition your code between Java and C/C++ as appropriate for your application. The NDK supports calling C/C++ code from Java through the Java Native Interface (JNI). JNI calls carry significant overhead, however, so it is important to partition your application so that the number of calls crossing the JNI boundary is minimized. In general, most of your OpenGL ES code should stay in C/C++ for the best performance and the easiest porting, while new Java code can be written to use GLSurfaceView and other SDK classes to manage application lifecycle events and support other game functions. Android support for JNI on Intel Atom processors began with NDK revision 6b.

    The NDK supports OpenGL ES 1.1 and 2.0 and provides sample applications for both versions; the samples also demonstrate how to combine C functions with Java through JNI. The applications differ in how their code is partitioned between Java and C and in how they are threaded. All of them use the NDK and native C code, but in the native-media sample all of the OpenGL ES code is in Java, in san-angeles and native-activity all of the OpenGL ES code is in C, and hello-gl2 splits its EGL and OpenGL ES code between Java and C. The hello-gl2 sample is best avoided, not only because of that split but also because it does not properly configure GLSurfaceView for an OpenGL ES 2.0 surface first, which is done by calling setEGLContextClientVersion(2). See Table 6.

    Table 6. OpenGL ES sample applications in the NDK

    API used          Sample application     SDK/NDK partitioning
    OpenGL ES 1.1     san-angeles            All EGL and OpenGL ES code is in C.
    OpenGL ES 1.1     native-activity        All code is in C and uses the NativeActivity class.
    OpenGL ES 2.0     hello-gl2              EGL setup is in Java; OpenGL ES code is in C.
    OpenGL ES 2.0     native-media           All EGL and OpenGL ES code is in Java.

    Although it does not use OpenGL ES, the bitmap-plasma sample is also interesting because it demonstrates how native functions can use the jnigraphics library to access the pixels of an Android Bitmap directly.

    Note: You can download the Android NDK from http://developer.android.com/tools/sdk/ndk/index.html

    Activity Lifecycle Events and Threading

    Android requires that all OpenGL ES calls be made from a single thread, because an EGL context can be associated with only one thread, and rendering graphics on the main UI thread is strongly discouraged. The best approach is therefore to create a separate thread dedicated to all of the OpenGL ES code and keep that code there. If your application uses GLSurfaceView, this dedicated OpenGL ES rendering thread is created for you automatically. Otherwise, your application must create the rendering thread itself.

    The san-angeles and native-activity sample applications both keep all of their OpenGL ES code in C, but san-angeles uses a little Java and GLSurfaceView to create the rendering thread and manage the activity lifecycle, whereas the native-activity sample uses no Java code at all. Instead of GLSurfaceView, it manages the activity lifecycle in C and uses the rendering thread provided by the NativeActivity class. NativeActivity is a convenience class provided by the NDK that lets you implement activity lifecycle handlers such as onCreate(), onPause(), and onResume() in native code. Some Android services and content providers cannot be accessed directly from native code but can be reached through JNI.

    The native-activity sample is a good starting point for porting an OpenGL ES game, because it demonstrates how to use the NativeActivity class and the android_native_app_glue static library to handle lifecycle events in native code. It provides a separate rendering thread for the OpenGL ES code, a rendering surface, and a window on that surface, so you do not need the GLSurfaceView or TextureView classes. The main entry point of the native application is android_main(), which runs in its own thread with its own event loop for retrieving input events. Unfortunately, the NDK does not provide an OpenGL ES 2.0 version of this sample, but you can replace all of the 1.1 code in it with 2.0 code.
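    Reduced to its essentials, the skeleton of such an android_main() looks roughly like the following (our own minimal sketch of the glue-library pattern, not the sample itself):

    #include <android_native_app_glue.h>

    static void handle_cmd(struct android_app* app, int32_t cmd)
    {
        switch (cmd) {
        case APP_CMD_INIT_WINDOW: /* create the EGL context and surface here */ break;
        case APP_CMD_TERM_WINDOW: /* destroy the EGL surface here */            break;
        }
    }

    void android_main(struct android_app* state)
    {
        state->onAppCmd = handle_cmd;

        for (;;) {
            int events;
            struct android_poll_source* source;

            /* drain pending lifecycle and input events */
            while (ALooper_pollAll(0, NULL, &events, (void**)&source) >= 0) {
                if (source) source->process(state, source);
                if (state->destroyRequested) return;
            }
            /* render one OpenGL ES frame here */
        }
    }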

    Applications that use NativeActivity must run on Android 2.3 or later and must include specific declarations in their manifest file, as described in Part 2 of this article.

    The Java Native Interface

    If you choose to implement most of your application in C/C++, it will be hard to avoid using Java classes in a larger, more professional project. For example, the Android AssetManager and Resources APIs are available only in the SDK, and they are the preferred way to handle internationalization, different screen sizes, and so on. JNI offers a way around this, because it not only allows Java code to call C/C++ functions, it also allows C/C++ code to call Java classes. So although JNI adds some overhead, do not avoid it entirely. It is the best way to reach important system functionality that is only available in the SDK, especially when that functionality is not performance critical. A complete introduction to JNI is beyond the scope of this article, but these are the three basic steps required to call from Java into C/C++ (see the sketch after this list):

    1. Add a declaration for the C/C++ function to the Java class file, marked as native.
    2. Add a static initializer that loads the shared object library containing the native function.
    3. Add a function with the corresponding name, following the JNI naming scheme, to the native source file.
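    The pieces fit together as in this sketch, with made-up package, class, library, and method names; the Java parts from steps 1 and 2 are shown as comments above the exported C/C++ function from step 3.

    // Step 1 (Java):  public native void nativeInit(int level);
    // Step 2 (Java):  static { System.loadLibrary("gamelib"); }
    // Step 3 (C/C++): the exported name encodes the package, class, and method:
    #include <jni.h>

    extern "C" JNIEXPORT void JNICALL
    Java_com_example_game_GameLib_nativeInit(JNIEnv* env, jobject thiz, jint level)
    {
        // native initialization for the requested level goes here
    }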

    Note: For more information on using JNI with the NDK, see http://developer.android.com/training/articles/perf-jni.html

    Should You Use OpenGL ES 1.1 or 2.0?

    Which version of OpenGL ES should you use on Android? Version 1.0 of OpenGL ES has been superseded by 1.1, so the real choice is between 1.1 and 2.0. Khronos and Google will probably support both versions indefinitely, but in most respects OpenGL ES 2.0 is superior to 1.1. With its OpenGL Shading Language (GLSL) ES shader programmability it is more capable and offers higher performance, and it may even need less code and less texture memory. The reason Khronos and Google continue to support version 1.1 is that it is much closer to the original OpenGL 1.x that desktop and console games used for decades. Porting an older game to OpenGL ES 1.1 is therefore easier than porting it to 2.0, and the older the game, the more that holds true.

    If the game being ported has no shader code, you can choose either OpenGL ES 1.1 or 2.0, but 1.1 will probably be easier. If your game already contains shader code, OpenGL ES 2.0 is clearly the right choice, especially since recent Android releases make heavy use of version 2.0. According to Google, as of October 2012 more than 90% of the Android devices visiting the Google Play site supported both OpenGL ES 1.1 and 2.0.

    Note: For more information, see http://developer.android.com/about/dashboards/index.html

    Conclusion

    You can implement graphics rendering on Android with OpenGL ES through the Android SDK, the NDK, or a combination of both using the JNI. The SDK approach requires coding in Java and is best suited to developing new applications, while the NDK is more practical for porting legacy OpenGL code written in C/C++. Most game porting projects will combine SDK and NDK components. For new projects, choose OpenGL ES 2.0 over 1.1, unless your legacy OpenGL code is so old that it contains no GLSL shader code at all.

    Part 2 of this series discusses the obstacles to porting OpenGL games that you must understand before starting such a project, including differences in OpenGL extensions, floating-point support, texture compression formats, and the GLU library. It also describes how to set up an Android development system for OpenGL ES on Intel Atom processors and how to get the best performance out of the Android Virtual Device emulation tool.

    About the Author

    Clay D. Montgomery is a leading developer of OpenGL drivers and applications for embedded systems. He has worked on the design of cross-platform graphics accelerator hardware, graphics drivers, APIs, and OpenGL applications at STB Systems, VLSI Technology, Philips Semiconductors, Nokia, Texas Instruments, and AMX, and as an independent consultant. He helped develop some of the first OpenGL ES, OpenVG*, and SVG drivers and applications for the Freescale i.MX and TI OMAP* platforms and for Vivante, AMD, and PowerVR*. He develops and teaches workshops on OpenGL ES development on embedded Linux and has represented several companies in the Khronos Group.

    For More Information

  • Developers
  • Android*
  • OpenGL*
  • Game Development
  • Intel® Atom™ Processors
  • Phone
  • URL
  • Who will get feedback?


    Hi,

    Just curious, who will get feedback from the dream team of game developers? Everyone, just the winners, or some specific selection?

    Thx.

    Can we submit more than one game?


    Are we able to submit more than one game demo?


    2014 Contest - quick stats


    There were over 150 submissions to the contest this year!  WOOHOO!!!  They broke down by genre like this:


    Action: 35
    Adventure/Role Playing: 18
    Open/Other: 44
    Platformer: 24
    Puzzle/Physics: 34

    I've followed everyone's download links, contacted a few that I had some questions on, and will be installing and getting the internal screeners lined up!  Lots of AWESOME looking games were submitted.  You have all seriously raised the bar on this competition over the last few years, and I'm very proud to be a part of it, and honored that you have taken the time to submit!

    Aloha!

    Mitch

     

    Using Intel® C++ Composer XE for Multiple Simple Random Sampling without Replacement


    Introduction

    Random sampling is often used when pre- or post-processing of every record in a data set is too expensive, as in the following examples. When the file of records or the database is very large, the cost of retrieving each record is high. When post-processing a single record is too time consuming - as in further physical examination of the real-world entity described by a record, fiscal audit of financial records, or medical examination of sampled patients for epidemiological studies - only a sample can be processed. Random sampling is typically used to support statistical analysis of an entire data set and estimation of aggregate statistics (such as an average), to estimate parameters of interest, or to perform hypothesis testing. Typical applications of random sampling are financial audit, fissile materials audit, epidemiology, exploratory data analysis and graphics, statistical quality control, polling and marketing research, official surveys and censuses, statistical database security and privacy, etc.

    Problem statement

    Definitions:

    • The population to be sampled is assumed to be a set of records (tuples) of a known size N.
    • A fixed-size random sample is a random sample for which the sample size is a specified constant M.
    • A simple random sample without replacement (SRSWOR) is a subset of the elements of a population where each element is equally likely to be included in the sample and no duplicates are allowed.

    We need to generate multiple fixed-size simple random samples without replacement. Each sample is unbiased, i.e., each item (record) in a sample is chosen from the whole population with equal probability 1/N, independently of the others. All samples are independent.

    Note: We consider a special case of problems where all records are numbered using natural numbers from 1 to N, so we do not need access to population items themselves (or we have array of indexes of population items).

    In other words, we need to conduct a series of experiments, each generating a sequence of M unique random natural numbers from 1 to N (1≤M≤N).

    The attached program uses M=6 and N=49, conducts 119 696 640 experiments, generates a large number of result samples (sequences of length M) in the single array RESULTS_ARRAY, and uses all available parallel threads. In the program, we call each experiment a “lottery M of N”.

    Considered approaches to simulate one experiment

    Algorithm 1

    A straightforward algorithm to simulate one experiment is as follows:

                A1.1: let RESULTS_ARRAY be empty
                A1.2: for i from 1 to M do:
                    A1.3: generate random natural number X from {1,...,N}
                    A1.4: if X is already present in RESULTS_ARRAY (loop), then go to A1.3
                    A1.5: put X at the end of RESULTS_ARRAY
                End.
    

    In more detail, step A1.4 is the “for” loop of length i-1:

     

                A1.4.1: for k from 1 to i-1:
                    A1.4.2: if RESULTS_ARRAY[k]==X, then go to A1.3
    

     

    Algorithm 2

    This algorithm uses the partial “Fisher-Yates shuffle” algorithm. Each experiment is treated as a partial length-M random shuffle of the whole population of N elements. It needs M random numbers. The algorithm is as follows:

                A2.1: (Initialization step) let PERMUT_BUF contain natural numbers 1, 2, ..., N
                A2.2: for i from 1 to M do:
                    A2.3: generate random integer X uniform on {i,...,N}
                    A2.4: interchange PERMUT_BUF[i] and PERMUT_BUF[X]
                A2.5: (Copy step) for i from 1 to M do: RESULTS_ARRAY[i]=PERMUT_BUF[i]
                End.
    

    Explanation: each iteration of the loop A2.2 works as a real lottery step. Namely, in each step, we extract random item X from remaining items in the bin PERMUT_BUF[i], ..., PERMUT_BUF[N] and put it at the end of the results row PERMUT_BUF[1],...,PERMUT_BUF[i]. The algorithm is partial because we do not generate full permutation of length N, but only a part of length M.

    At the cost of more memory and extra Initialization and Copy steps (loops), Algorithm 2 needs fewer random numbers than Algorithm 1, and does not have the second nested loop A1.4 with “if” branching. Therefore, we chose to use Algorithm 2.

    In the case of simulating many experiments, Initialization step is needed only once because at the beginning of each experiment, the order of natural numbers 1...N in the PERMUT_BUF array does not matter (like in real lottery).

    Note that in our C program (attached), zero-based arrays are used.
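    For reference, a plain C version of a single experiment under Algorithm 2 might look like the sketch below; it uses the standard rand() only as a placeholder for the vectorized Intel MKL generator used in the attached program.

    #include <stdlib.h>

    /* One "lottery M of N" experiment (zero-based arrays), following Algorithm 2. */
    static void one_experiment(unsigned int *permut_buf, unsigned int *results, int M, int N)
    {
        int i;
        for (i = 0; i < M; i++) {
            int x = i + rand() % (N - i);       /* A2.3: placeholder for uniform on {i,...,N-1} */
            unsigned int tmp = permut_buf[i];   /* A2.4: swap */
            permut_buf[i] = permut_buf[x];
            permut_buf[x] = tmp;
        }
        for (i = 0; i < M; i++)                 /* A2.5: copy the sample out */
            results[i] = permut_buf[i];
    }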

    Optimization

    We use Intel® C++ Compiler, with its OpenMP* implementation, and Intel® MKL shipped with Intel® Composer XE 2013 SP1.

    Parallelization

    We exploit all CPUs with all available processor cores by using OpenMP* (see "#pragma omp parallel for" in the code, and see [4] for more details about OpenMP usage).

    We use Intel® MKL MT2203 BRNG since it easily supports a parallel independent stream in each thread (see [3] for details).

         #pragma omp parallel for num_threads(THREADS_NUM)
         for( thr=0; thr<THREADS_NUM; thr++ ) { // thr is thread index
             VSLStreamStatePtr stream;

             // RNG initialization
             vslNewStream( &stream, VSL_BRNG_MT2203+thr, seed );

             ... // Generation of experiment samples (in thread number thr)

             vslDeleteStream( &stream );
         }
    

    Generation of experiment samples

    In each thread, we generate EXPERIM_NUM/THREADS_NUM experiment results. For each experiment we call Fisher_Yates_shuffle function that implements steps A2.2, A2.3, and A2.4 of the core algorithm to generate the next results sample. After that we copy the generated sample to RESULTS_ARRAY (step A2.5) as shown below:

         //  A2.1: (Initialization step) let PERMUT_BUF contain natural numbers 1, 2, ..., N
    
         for(i=0; i<N; i++) PERMUT_BUF[i]=i+1; // we will use the set {1,...,N}
    
         for(sample_num=0;sample_num<EXPERIM_NUM/THREADS_NUM;sample_num++) {
    
             Fisher_Yates_shuffle(...);
         
    
             for(i=0; i<M; i++)
    
                 RESULTS_ARRAY[thr*ONE_THR_PORTION_SIZE + sample_num*M + i] = PERMUT_BUF[i];
    
         }
    

    Fisher_Yates_shuffle function

    The function implements steps A2.2, A2.3, and A2.4 of the core algorithm (it chooses a random item from the remaining part of PERMUT_BUF and places that item at the end of the output row, namely in PERMUT_BUF[i]):

                for(i=0; i<M; i++) {
    
                    j = Next_Uniform_Int(...);            
    
                    tmp = PERMUT_BUF[i];
    
                    PERMUT_BUF[i] = PERMUT_BUF[j];
    
                    PERMUT_BUF[j] = tmp;
    
                }
    

     

    Next_Uniform_Int function

    In step A2.3 of the core algorithm, our program calls the Next_Uniform_Int function to generate the next random integer X, uniform on {i,...,N-1}.

    To exploit the full power of the vectorized RNGs in Intel MKL while hiding the vectorization overheads, the generator should be called to fill a sufficiently large vector D_UNIFORM01_BUF of size RNGBUFSIZE that fits in the L1 cache. Each thread uses its own buffer D_UNIFORM01_BUF and an index D_UNIFORM01_IDX that points just past the last random number consumed from that buffer. On the first call to the Next_Uniform_Int function (or whenever all random numbers in the buffer have been consumed), we refill the whole buffer by calling the vdRngUniform function with length RNGBUFSIZE and reset the index D_UNIFORM01_IDX to zero:

     vdRngUniform( ... RNGBUFSIZE, D_UNIFORM01_BUF ... );
    

    Because a single Intel MKL generator call produces random values with one fixed distribution, while step A2.3 needs random integers on different intervals, we fill the buffer with double-precision random numbers uniformly distributed on [0;1) and then, in the “Integer scaling step”, convert these double-precision values to the required integer intervals. Fortunately, we know in advance that step A2.3 will consume this sequence of numbers with the following distributions:

                number 0:     distributed on {0,...,N-1}   = 0   + {0,...,N-1}
                number 1:     distributed on {1,...,N-1}   = 1   + {0,...,N-2}
                ...
                number M-1:   distributed on {M-1,...,N-1} = M-1 + {0,...,N-M}
                (then the previous M lines repeat)
                number M:     distributed like number 0
                number M+1:   distributed like number 1
                ...
                number 2*M-1: distributed like number M-1
                (and so on, repeating every M numbers)

     

    Hence, the “Integer scaling step” looks like this:

                // Integer scaling step
    
                for(i=0;i<RNGBUFSIZE/M;i++)
    
                    for(k=0;k<M;k++)
    
                        I_RNG_BUF[i*M+k] =
    
                        k + (unsigned int)(D_UNIFORM01_BUF[i*M+k] * (double)(N-k)); // scale to {k,...,N-1}
    

    Notes:

    • RNGBUFSIZE must be a multiple of M;
    • This double-nested loop is not well suited to vectorization, because the inner trip count M=6 does not match the Intel® Advanced Vector Extensions (Intel® AVX) vector width of 8 integers per register;
    • Even if we interchange the “for i” and “for k” loops and choose RNGBUFSIZE/M to be a multiple of 8, the double-nested loop is still not well suited to vectorization, because the results would not be stored contiguously in memory;
    • We put the scaled integers I_RNG_BUF[i*M+k] into the same buffer that holds the double-precision random values D_UNIFORM01_BUF[i*M+k]. Depending on the CPU, however, it may be preferable to keep a separate buffer for the integers, sized so that both buffers together still fit in the L1 cache. Separate buffers avoid the store-after-load forwarding penalty stalls that can occur because the size of the loaded double-precision values differs from the size of the stored integers.
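
    Putting these pieces together, one possible shape of the buffered generator is sketched below. The per-thread buffers, the RNGBUFSIZE, M, and N macros, and the exact function signature are assumptions made for this illustration; the attached lottery6of49.c is the authoritative version:

     #include "mkl.h"

     /* Sketch: returns the next random integer for step A2.3. The buffer is refilled with
        vdRngUniform whenever the index has wrapped to zero (it must start at zero). */
     static unsigned int Next_Uniform_Int_sketch(VSLStreamStatePtr stream,
                                                 double *d_uniform01_buf,  /* RNGBUFSIZE doubles   */
                                                 unsigned int *i_rng_buf,  /* RNGBUFSIZE integers  */
                                                 int *d_uniform01_idx)     /* next unused position */
     {
         unsigned int x;
         if (*d_uniform01_idx == 0) {
             int i, k;
             /* refill the whole buffer with doubles uniform on [0,1) */
             vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, RNGBUFSIZE,
                          d_uniform01_buf, 0.0, 1.0);
             /* integer scaling step: entry i*M+k becomes uniform on {k,...,N-1} */
             for (i = 0; i < RNGBUFSIZE / M; i++)
                 for (k = 0; k < M; k++)
                     i_rng_buf[i*M + k] =
                         k + (unsigned int)(d_uniform01_buf[i*M + k] * (double)(N - k));
         }
         x = i_rng_buf[*d_uniform01_idx];
         *d_uniform01_idx = (*d_uniform01_idx + 1) % RNGBUFSIZE;
         return x;
     }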

    Conclusions

    The attached implementation of the algorithm presented in this article, built with Intel C++ Composer XE, runs the case of 119,696,640 experiments of “lottery 6 of 49” roughly 24*13 (about 320) times faster than a sequential scalar version based on the GNU* Scientific Library (GSL) and the GNU Compiler Collection (GCC).

    Measured work time is:

    • 0.216 sec (algorithm presented in this article);
    • 69.321 sec (sequential scalar algorithm based on GSL+GCC, i.e., using the gsl_ran_choose function and the sequential RNG gsl_rng_mt19937 from GSL, gcc 4.4.6 20110731 with options -O2 -mavx -I$GSL_ROOT/include -L$GSL_ROOT/lib -lgsl -lgslcblas); a rough sketch of this baseline is shown below.
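
    For readers who want to see roughly what the GSL baseline looks like, here is a hedged reconstruction from reference [5]; the function name run_experiments_gsl and the surrounding loop are illustrative, and the exact benchmark code is not reproduced in this article:

     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     #define N 49
     #define M 6

     /* Sketch of the sequential baseline: one "6 of 49" draw per experiment using GSL.
        gsl_ran_choose samples M items from the population without replacement. */
     void run_experiments_gsl(int *results, long experim_num)
     {
         int population[N];
         int i;
         long e;
         gsl_rng *rng = gsl_rng_alloc(gsl_rng_mt19937);   /* sequential MT19937 stream */
         for (i = 0; i < N; i++) population[i] = i + 1;
         for (e = 0; e < experim_num; e++)
             gsl_ran_choose(rng, &results[e * M], M, population, N, sizeof(int));
         gsl_rng_free(rng);
     }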

    The measurements were done on the following platform:

    • CPU: 2 x 3rd-generation Intel® Core™ i7 processor, 2.5 GHz, 2*12 cores, 30 MB L3 cache, Hyper-Threading off;
    • OS: Red Hat Enterprise Linux* Server release 6.2, x86_64;
    • Software: Intel® C++ Composer XE 2013 SP1 (with Intel C++ Compiler 13.1.1 and Intel MKL 11.0.3).

    Program code attached (see lottery6of49.c file).

    References

    [1] D. Knuth. The Art of Computer Programming. Volume 2. Section 3.4.2 Random Sampling and Shuffling. Algorithm S, Algorithm P;

    [2] Intel® Math Kernel Library Reference Manual, available at https://software.intel.com/en-us/intel-software-technical-documentation?..., section “Statistical Functions”, subsection “Random Number Generators”;

    [3] Intel® MKL Vector Statistical Library Notes, available at https://software.intel.com/en-us/intel-software-technical-documentation?..., section “Independent Streams. Block-Splitting and Leapfrogging” about usage of several independent streams of VSL_BRNG_MT2203;

    [4] User and Reference Guide for the Intel® C++ Compiler, available at https://software.intel.com/en-us/intel-software-technical-documentation?..., section “Key Features”, subsection “OpenMP support”;

    [5] GNU Scientific Library (GSL), available at http://www.gnu.org/software/gsl, documentation section “18 Random Number Generation” about gsl_rng_alloc() and gsl_rng_mt19937 and subsection “20.38 Shuffling and Sampling” about gsl_ran_choose() function.

     

Developing Games with MonoGame*


    By Bruno Sonnino

    Download article as PDF

    Many developers want to develop games. And why not? Games are among the best sellers in the history of computing, and the fortunes involved in the game business keep attracting developers. As a developer, I would certainly like to be among those who develop the next Angry Birds* or Halo*.

    In practice, game development is one of the most difficult areas of software development. You will have to remember those trigonometry, geometry, and physics classes you thought you would never use, because they become an important part of a game. Besides that, your game has to combine sound, video, and a story in a way that makes users want to play more and more. And all of this before you write a single line of code!

    To make things easier, there are several frameworks available for developing games using not only C and C++, but even C# or JavaScript* (yes, you can develop three-dimensional games for your browser using HTML5 and JavaScript).

    One of these frameworks is Microsoft XNA*, built on top of Microsoft DirectX* technology, which lets you create games for the Xbox 360*, Windows*, and Windows Phone*. Microsoft is discontinuing XNA but, meanwhile, the open source community has introduced a new player: MonoGame*.

    What Is MonoGame?

    MonoGame is an open source implementation of the XNA application programming interface (API). It implements the XNA API not only for Windows, but also for Mac* OS X*, Apple iOS*, Google Android*, Linux*, and Windows Phone. This means you can develop a game for any of these platforms with only minor modifications. It is a fantastic feature: you can create games in C# that can easily be ported to all the major desktop, tablet, and smartphone platforms. That is a big boost for anyone who wants to conquer the world with their games.

    Installing MonoGame on Windows

    You do not even need Windows to develop with MonoGame. You can use MonoDevelop* (an open source integrated development environment [IDE] for the Microsoft .NET languages) or Xamarin Studio*, a cross-platform IDE developed by Xamarin. With these IDEs, you can develop in C# on Linux or the Mac.

    If you are a Microsoft .NET developer and use Microsoft Visual Studio* daily, as I do, you can install MonoGame in Visual Studio and use it to create your games. When this article was written, the latest stable release was version 3.2. This version runs in Visual Studio 2012 and 2013 and lets you create a DirectX desktop game, which you will need if you want to support touch in your game.

    The MonoGame installation adds several templates to Visual Studio that you can use to create your games, as shown in Figure 1.

    Figure 1. New templates installed by MonoGame*

    To create your first game, click MonoGame Windows Project and choose a name. Visual Studio creates a new project with all the files and references you need. If you run this project, you will get something like Figure 2.

    Figure 2. Game created with the MonoGame* template

    Dull, isn't it? Just a light blue screen, but this is the starting point for any game you create. Press Esc and the window closes.

    You could start writing your game with the project you have now, but there is a catch: you cannot add resources, such as images, sprites, sounds, or fonts, without compiling them into a format compatible with MonoGame. For that, you have one of these options:

    • Install XNA Game Studio 4.0.
    • Install the Windows Phone 8 software development kit (SDK).
    • Use an external program, such as the XNA content compiler.

    XNA Game Studio

    XNA Game Studio has everything you need to create games for Windows and the Xbox 360. It also has a content compiler that compiles your resources into .xnb files, which can be added to your MonoGame project. Its installer supports only Visual Studio 2010. If you do not want to install Visual Studio 2010 just for that, you can install XNA Game Studio in Visual Studio 2012 (see the link in the "For More Information" section of this article).

    Windows Phone 8 SDK

    You cannot install XNA Game Studio directly in Visual Studio 2012 or 2013, but the Windows Phone 8 SDK installs without problems in both IDEs. You can use it to create a project for compiling your resources.

    XNA Content Compiler

    If you do not want to install an SDK just to compile your resources, you can use the XNA content compiler (see the link in "For More Information"), an open source program that compiles your resources into .xnb files that can be used in MonoGame.

    Creating Your First Game

    The game created earlier with the MonoGame template is the starting point for every game. You will use the same process in all your games. In Program.cs, you have the Main function. This function initializes and runs the game:

    static void Main()
    {
        using (var game = new Game1())
            game.Run();
    }

    Game1.cs is the heart of the game. There, you have two methods that are called 60 times per second in a loop: Update and Draw. In Update, you recalculate the data for all the elements in the game; in Draw, you draw those elements. Note that this is a very tight loop. You have 1/60 of a second, that is, 16.7 milliseconds, to calculate and draw the data. If you take longer than that, the program may skip some Draw cycles and you will see rendering glitches in your game.

    Until recently, the input for games on desktop computers was the keyboard and the mouse. Unless the user had bought extra hardware, such as steering wheels or joysticks, you could not assume other input methods were present. With new equipment such as Ultrabook™ devices, Ultrabook 2 in 1s, and all-in-one PCs, these options have changed. You can use touch and sensor input, giving users a more immersive and realistic game.

    For this first game, we will create a soccer penalty kick game. The user uses touch to "kick" the ball, and the computer goalkeeper tries to catch it. The direction and speed of the ball are determined by the user's touch. The goalkeeper picks an arbitrary corner and speed to catch the ball. Every goal scores one point. If there is no goal, the goalkeeper gets the point.

    Adding Content to the Game

    The first step in the game is adding content. Start by adding the field background and the ball. To do this, create two .png files: one for the soccer field (Figure 3) and another for the ball (Figure 4).

    Figure 3. The soccer field

    Figure 4. The soccer ball

    To use these files in the game, you must compile them. If you are using XNA Game Studio or the Windows Phone 8 SDK, you must create an XNA content project. This project does not need to be in the same solution; you will use it only to compile the resources. Add the resources to the project and build it. Then go to the target directory and add the resulting .xnb files to your project.

    I prefer to use the XNA Content Compiler because it does not require a new project and lets you compile the resources whenever you need. Open the program, add the files to the list, select the output directory, and click Compile. The .xnb files are then ready to be added to the project.

    Content Files

    Once the .xnb files are available, add them to the Content folder of your game. You must set the build action for each file to Content and the Copy to Output Directory option to Copy if Newer. If you do not, you will get an error when loading the resources.

    Create two fields to store the field and ball textures:

    private Texture2D _backgroundTexture;
    private Texture2D _ballTexture;

    These fields are loaded in the LoadContent method:

    protected override void LoadContent()
    {
        // Create a new SpriteBatch, which can be used to draw textures.
        _spriteBatch = new SpriteBatch(GraphicsDevice);
    
        // TODO: use this.Content to load your game content here
        _backgroundTexture = Content.Load<Texture2D>("SoccerField");
        _ballTexture = Content.Load<Texture2D>("SoccerBall");
    }

    Note that the names of the textures are the same as the file names in the Content folder, but without the extension.

    Next, draw the textures in the Draw method:

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Green);
    
        // Set the position for the background
        var screenWidth = Window.ClientBounds.Width;
        var screenHeight = Window.ClientBounds.Height;
        var rectangle = new Rectangle(0, 0, screenWidth, screenHeight);
        // Begin a sprite batch
        _spriteBatch.Begin();
        // Draw the background
        _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White);
        // Draw the ball
        var initialBallPositionX = screenWidth / 2;
    var initialBallPositionY = (int)(screenHeight * 0.8);
        var ballDimension = (screenWidth > screenHeight) ?
            (int)(screenWidth * 0.02) :
            (int)(screenHeight * 0.035);
    var ballRectangle = new Rectangle(initialBallPositionX, initialBallPositionY,
            ballDimension, ballDimension);
        _spriteBatch.Draw(_ballTexture, ballRectangle, Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    

    This method clears the screen with a green color and draws the background and the ball at the penalty mark. The first spriteBatch Draw call draws the background, resized to the window size, at position 0,0; the second draws the ball at the penalty mark, resized proportionally to the window size. There is no movement here, because the positions do not change. The next step is to move the ball.

    Moving the Ball

    To move the ball, you must recalculate its position on every iteration of the loop and draw it at the new position. Calculate the new position in the Update method:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        _ballPosition -= 3;
        _ballRectangle.Y = _ballPosition;
        base.Update(gameTime);
    
    }
    
    

    The ball's position is updated in every loop by subtracting three pixels. If you want the ball to move faster, subtract more pixels. The _screenWidth, _screenHeight, _backgroundRectangle, _ballRectangle, and _ballPosition variables are private fields, initialized in the ResetWindowSize method:

    private void ResetWindowSize()
    {
        _screenWidth = Window.ClientBounds.Width;
        _screenHeight = Window.ClientBounds.Height;
        _backgroundRectangle = new Rectangle(0, 0, _screenWidth, _screenHeight);
        _initialBallPosition = new Vector2(_screenWidth / 2.0f, _screenHeight * 0.8f);
        var ballDimension = (_screenWidth > _screenHeight) ?
            (int)(_screenWidth * 0.02) :
            (int)(_screenHeight * 0.035);
        _ballPosition = (int)_initialBallPosition.Y;
        _ballRectangle = new Rectangle((int)_initialBallPosition.X, (int)_initialBallPosition.Y,
            ballDimension, ballDimension);
    }

    This method resets all the variables that depend on the window size. It is called from the Initialize method:

    protected override void Initialize()
    {
        // TODO: Add your initialization logic here
        ResetWindowSize();
        Window.ClientSizeChanged += (s, e) => ResetWindowSize();
        base.Initialize();
    }

    This method is called in two different situations: at the start of the process and every time the window size changes. Initialize handles ClientSizeChanged, so that when the window size changes, the variables that depend on it are recalculated and the ball is repositioned at the penalty mark.

    If you run the program, you will see that the ball moves in a straight line but does not stop when the field ends. You can reposition the ball when it reaches the goal with the following code:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        _ballPosition -= 3;
        if (_ballPosition < _goalLinePosition)
            _ballPosition = (int)_initialBallPosition.Y;
    
        _ballRectangle.Y = _ballPosition;
        base.Update(gameTime);
    
    }
    
    

    The _goalLinePosition variable is another field, initialized in the ResetWindowSize method:

    _goalLinePosition = _screenHeight * 0.05;

    You must make another change in the Draw method: remove all the position calculation code.

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Green);
    
       var rectangle = new Rectangle(0, 0, _screenWidth, _screenHeight);
        // Begin a sprite batch
        _spriteBatch.Begin();
        // Draw the background
        _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White);
        // Draw the ball
    
        _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    
    

    The movement is perpendicular to the goal. If you want the ball to move at an angle, create a _ballPositionX field and increment it to move right or decrement it to move left. A better way is to use a Vector2 for the ball's position, like this:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        _ballPosition.X -= 0.5f;
        _ballPosition.Y -= 3;
        if (_ballPosition.Y < _goalLinePosition)
            _ballPosition = new Vector2(_initialBallPosition.X,_initialBallPosition.Y);
        _ballRectangle.X = (int)_ballPosition.X;
        _ballRectangle.Y = (int)_ballPosition.Y;
        base.Update(gameTime);
    
    }
    
    

    If you run the program, you will see the ball moving at an angle (Figure 5). The next step is to make the ball move when the user "kicks" it.

    Figure 5. Game with the ball in movement

    Touch and Gestures

    In this game, the movement of the ball must start with a flick. This touch determines the direction and speed of the ball.

    In MonoGame, you can get touch input by using the TouchScreen class. You can use the raw input data or use the gestures API. The raw data gives you more flexibility, because you can process the input data any way you want, while the gestures API converts the raw data into filtered gestures, so you receive input only for the gestures you want.

    Although the gestures API is easier to use, there are some cases where it cannot be used. For example, if you want to detect a special gesture, such as an X, or gestures with more than two fingers, you must use the raw data.

    For this game we only need the flick, and the gestures API supports it, so we will use it. The first thing to do is indicate which gestures you want, using the TouchPanel class. For example, the code:

    TouchPanel.EnabledGestures = GestureType.Flick | GestureType.FreeDrag;

    . . . makes MonoGame detect and notify you only when a flick or a free drag is performed. Then, in the Update method, you can process the gestures like this:

    if (TouchPanel.IsGestureAvailable)
    {
        // Read the next gesture
        GestureSample gesture = TouchPanel.ReadGesture();
        if (gesture.GestureType == GestureType.Flick)
        {…
        }
    }
    
    

    First, determine whether a gesture is available. If it is, you can call ReadGesture to get and process it.

    Starting the Movement with Touch

    Enable flick gestures in the game in the Initialize method:

    protected override void Initialize()
    {
        // TODO: Add your initialization logic here
        ResetWindowSize();
        Window.ClientSizeChanged += (s, e) => ResetWindowSize();
        TouchPanel.EnabledGestures = GestureType.Flick;
        base.Initialize();
    }

    Until now, the ball was moving the whole time the game was running. Use a private field, _isBallMoving, to tell the game when the ball is in movement. In the Update method, when the program detects a flick, set _isBallMoving to True to start the movement. When the ball reaches the goal line, set _isBallMoving to False and reposition the ball:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        if (!_isBallMoving && TouchPanel.IsGestureAvailable)
        {
            // Read the next gesture
            GestureSample gesture = TouchPanel.ReadGesture();
            if (gesture.GestureType == GestureType.Flick)
            {
                _isBallMoving = true;
                _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f;
            }
        }
        if (_isBallMoving)
        {
            _ballPosition += _ballVelocity;
            // reached goal line
            if (_ballPosition.Y < _goalLinePosition)
            {
                _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
                _isBallMoving = false;
                while (TouchPanel.IsGestureAvailable)
                    TouchPanel.ReadGesture();
            }
            _ballRectangle.X = (int) _ballPosition.X;
            _ballRectangle.Y = (int) _ballPosition.Y;
        }
        base.Update(gameTime);
    
    }
    
    

    The ball's speed is no longer fixed: the program uses the _ballVelocity field to set the ball's velocity in the x and y directions. Gesture.Delta returns the movement variation since the last update. To calculate the flick velocity, multiply this vector by the TargetElapsedTime property.

    If the ball is moving, the _ballPosition vector is incremented by the velocity (in pixels per frame) until the ball reaches the goal line. The following code:

    _isBallMoving = false;
    while (TouchPanel.IsGestureAvailable)
        TouchPanel.ReadGesture();

    . . . does two things: it stops the ball and removes all gestures from the input queue. If you do not do this, the user could flick while the ball is moving, which would restart the movement after the ball has stopped.

    When you run the game, you can flick the ball and it moves in the direction of the flick, with the speed of the gesture. There is a catch, though: the code does not detect where the gesture happened. You can flick anywhere on the screen (not only on the ball) and the ball starts moving. You could use gesture.Position to detect the position of the gesture, but this property always returns 0,0, so it cannot be used.

    The solution is to use the raw data, get the touch input, and check whether it is near the ball. The following code determines whether the touch input hits the ball. If it does, we set the _isBallHit field:

    TouchCollection touches = TouchPanel.GetState();
    
    if (touches.Count > 0 && touches[0].State == TouchLocationState.Pressed)
    {
        var touchPoint = new Point((int)touches[0].Position.X, (int)touches[0].Position.Y);
        var hitRectangle = new Rectangle((int)_ballPositionX, (int)_ballPositionY, _ballTexture.Width,
            _ballTexture.Height);
        hitRectangle.Inflate(20,20);
        _isBallHit = hitRectangle.Contains(touchPoint);
    }

    Then the movement starts only if the _isBallHit field is True:

    if (TouchPanel.IsGestureAvailable && _isBallHit)

    If you run the game, you will see that the ball only starts moving when you flick it. We still have a problem here: if you hit the ball too gently, or in a direction where it never reaches the goal line, the game gets stuck because the ball never returns to its initial position. You must set a timeout for the ball's movement. When the time expires, the game repositions the ball.

    The Update method has one parameter: gameTime. If you store the value of gameTime when the movement starts, you can tell how long the ball has been moving and reset the game when this time expires:

    if (gesture.GestureType == GestureType.Flick)
    {
        _isBallMoving = true;
        _isBallHit = false;
        _startMovement = gameTime.TotalGameTime;
        _ballVelocity = gesture.Delta*(float) TargetElapsedTime.TotalSeconds/5.0f;
    }
    
    ...
    
    var timeInMovement = (gameTime.TotalGameTime - _startMovement).TotalSeconds;
    // reached goal line or timeout
if (_ballPosition.Y < _goalLinePosition || timeInMovement > 5.0)
    {
        _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
        _isBallMoving = false;
        _isBallHit = false;
        while (TouchPanel.IsGestureAvailable)
            TouchPanel.ReadGesture();
    }

    Adding a Goalkeeper

    The game is working, but it needs an element of difficulty: you must add a goalkeeper that keeps moving after the ball is kicked. The goalkeeper is a .png file compiled with the XNA Content Compiler (Figure 6). Add this compiled file to the Content folder, set its build action to Content, and set Copy to Output Directory to Copy if Newer.

    Figure 6. The goalkeeper

    The goalkeeper is loaded in the LoadContent method:

    protected override void LoadContent()
    {
        // Create a new SpriteBatch, which can be used to draw textures.
        _spriteBatch = new SpriteBatch(GraphicsDevice);
    
        // TODO: use this.Content to load your game content here
        _backgroundTexture = Content.Load<Texture2D>("SoccerField");
        _ballTexture = Content.Load<Texture2D>("SoccerBall");
        _goalkeeperTexture = Content.Load<Texture2D>("Goalkeeper");
    }

    You must draw it in the Draw method:

    protected override void Draw(GameTime gameTime)
    {
    
        GraphicsDevice.Clear(Color.Green);
    
        // Begin a sprite batch
        _spriteBatch.Begin();
        // Draw the background
        _spriteBatch.Draw(_backgroundTexture, _backgroundRectangle, Color.White);
        // Draw the ball
        _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White);
        // Draw the goalkeeper
        _spriteBatch.Draw(_goalkeeperTexture, _goalkeeperRectangle, Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    
    

    _goalkeeperRectangle holds the goalkeeper's rectangle in the window. It is changed in the Update method:

    protected override void Update(GameTime gameTime)
    {…
    
       _ballRectangle.X = (int) _ballPosition.X;
       _ballRectangle.Y = (int) _ballPosition.Y;
       _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY,
                        _goalKeeperWidth, _goalKeeperHeight);
       base.Update(gameTime);
    }
    
    

    The _goalkeeperPositionY, _goalKeeperWidth, and _goalKeeperHeight fields are updated in the ResetWindowSize method:

    private void ResetWindowSize()
    {…
        _goalkeeperPositionY = (int) (_screenHeight*0.12);
        _goalKeeperWidth = (int)(_screenWidth * 0.05);
        _goalKeeperHeight = (int)(_screenWidth * 0.005);
    }
    
    

    The goalkeeper's initial position is in the middle of the screen, near the goal line:

    _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth)/2;

    The goalkeeper starts moving at the same time as the ball. It moves from side to side in a harmonic motion. This sinusoid describes its movement:

    X = A * sin(at + δ)

    A is the amplitude of the movement (the width of the goal), t is the movement time, and a and δ are random coefficients (they make the movement random, so the user cannot predict the speed and the corner the goalkeeper will take).

    The coefficients are calculated when the user kicks the ball:

    if (gesture.GestureType == GestureType.Flick)
    {
        _isBallMoving = true;
        _isBallHit = false;
        _startMovement = gameTime.TotalGameTime;
        _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f;
        var rnd = new Random();
        _aCoef = rnd.NextDouble() * 0.005;
        _deltaCoef = rnd.NextDouble() * Math.PI / 2;
    }

    The a coefficient is the goalkeeper's speed, a number between 0 and 0.005 that represents a speed between 0 and 0.3 pixels/second (a maximum of 0.005 pixels per 1/60 of a second). The δ coefficient is a number between 0 and pi/2. While the ball is moving, you change the goalkeeper's position:

    if (_isBallMoving)
    {
        _ballPositionX += _ballVelocity.X;
        _ballPositionY += _ballVelocity.Y;
        _goalkeeperPositionX = (int)((_screenWidth * 0.11) *
                          Math.Sin(_aCoef * gameTime.TotalGameTime.TotalMilliseconds +
                          _deltaCoef) + (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11);…
    }
    

    The amplitude of the movement is _screenWidth * 0.11, and (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11 is added to the result so that the goalkeeper moves in front of the goal. Now it is time to make the goalkeeper catch the ball.

    Hit Testing

    If you want to know whether the goalkeeper catches the ball, you must check whether the ball's rectangle intersects the goalkeeper's rectangle. You do that in the Update method, after calculating the two rectangles:

    _ballRectangle.X = (int)_ballPosition.X;
    _ballRectangle.Y = (int)_ballPosition.Y;
    _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY,
        _goalKeeperWidth, _goalKeeperHeight);
    if (_goalkeeperRectangle.Intersects(_ballRectangle))
    {
        ResetGame();
    }

    ResetGame is just a refactoring of the code that returns the game to its initial state:

    private void ResetGame()
    {
        _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
        _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2;
        _isBallMoving = false;
        _isBallHit = false;
        while (TouchPanel.IsGestureAvailable)
            TouchPanel.ReadGesture();
    }

    With this simple code, the game knows whether the goalkeeper caught the ball. Now you need to know whether the ball hit the goal. You do that when the ball crosses the goal line.

    var isTimeout = timeInMovement > 5.0;
    if (_ballPosition.Y < _goalLinePosition || isTimeout)
    {
        bool isGoal = !isTimeout &&
            (_ballPosition.X > _screenWidth * 0.375) &&
            (_ballPosition.X < _screenWidth * 0.623);
        ResetGame();
    }

    The ball must be completely inside the goal; its position must be between the first goal post (_screenWidth * 0.375) and the second post (_screenWidth * 0.625 − _screenWidth * 0.02). Now it is time to update the game score.

    Adding a Scoreboard

    To add a scoreboard to the game, you must add a new resource: a spritefont with the font used in the game. A spritefont is an .xml file that describes the font: the font family, its size and weight, along with other properties. In a game, you can use a spritefont like this one:

    <?xml version="1.0" encoding="utf-8"?>
    <XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
      <Asset Type="Graphics:FontDescription">
        <FontName>Segoe UI</FontName>
        <Size>24</Size>
        <Spacing>0</Spacing>
        <UseKerning>false</UseKerning>
        <Style>Regular</Style>
        <CharacterRegions>
          <CharacterRegion>
            <Start>&#32;</Start>
            <End>&#126;</End>
          </CharacterRegion>
        </CharacterRegions>
      </Asset>
    </XnaContent>

    You must compile this .xml file with the XNA Content Compiler and add the resulting .xnb file to the project's Content folder; set its build action to Content and Copy to Output Directory to Copy if Newer. The font is loaded in the LoadContent method:

    _soccerFont = Content.Load<SpriteFont>("SoccerFont");

    In ResetWindowSize, reset the scoreboard position:

    var scoreSize = _soccerFont.MeasureString(_scoreText);
    _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);

    To keep the game score, declare two variables: _userScore and _computerScore. The _userScore variable is incremented when a goal is scored, and _computerScore is incremented when the ball goes out, the time expires, or the goalkeeper catches the ball:

    if (_ballPosition.Y < _goalLinePosition || isTimeout)
    {
        bool isGoal = !isTimeout &&
                      (_ballPosition.X > _screenWidth * 0.375) &&
                      (_ballPosition.X < _screenWidth * 0.623);
        if (isGoal)
            _userScore++;
        else
            _computerScore++;
        ResetGame();
    }
    …
    if (_goalkeeperRectangle.Intersects(_ballRectangle))
    {
        _computerScore++;
        ResetGame();
    }
    
    

    ResetGame recreates and repositions the scoreboard:

    private void ResetGame()
    {
        _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
        _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2;
        _isBallMoving = false;
        _isBallHit = false;
        _scoreText = string.Format("{0} x {1}", _userScore, _computerScore);
        var scoreSize = _soccerFont.MeasureString(_scoreText);
        _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);
        while (TouchPanel.IsGestureAvailable)
            TouchPanel.ReadGesture();
    }

    The _soccerFont.MeasureString method measures the string using the selected font. You will use this measurement to calculate the scoreboard position. The scoreboard is drawn in the Draw method:

    protected override void Draw(GameTime gameTime)
    {
    …
        // Draw the score
        _spriteBatch.DrawString(_soccerFont, _scoreText,
             new Vector2(_scorePosition, _screenHeight * 0.9f), Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    
    

    Turning On the Stadium Lights

    As a final touch, the game turns on the stadium lights when the ambient light level is low. New Ultrabook and 2 in 1 devices usually have light sensors that you can use to determine how much light there is in the environment and change the way the background is drawn.

    For desktop applications, you can use the Windows API Code Pack for the Microsoft .NET Framework, a library that provides access to features of Windows 7 and later operating systems. For this game, we will take another route: the WinRT sensor APIs. Although they were written for Windows 8, they are also available to desktop applications and can be used without changes. By using them, you can port your application to the Windows 8 Store without changing a single line of code.

    The Intel® Developer Zone (IDZ) has an article about using the WinRT APIs in a desktop application (see the "For More Information" section). Based on that information, select the project in Solution Explorer, right-click it, and select Unload Project. Then right-click it again and click Edit project. In the first PropertyGroup, add a TargetPlatformVersion tag:

    <PropertyGroup>
      <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
    …
      <FileAlignment>512</FileAlignment>
      <TargetPlatformVersion>8.0</TargetPlatformVersion>
    </PropertyGroup>

    Right-click again and then click Reload Project. Visual Studio reloads the project. When you add a new reference to the project, you will see the Windows tab in the Reference Manager, as shown in Figure 7.

    Figure 7. The Windows* tab in the Reference Manager

    Add the Windows reference to the project. You must also add a reference to System.Runtime.WindowsRuntime.dll. If you cannot find this assembly in the list, you can browse to the .NET assemblies folder. On my machine, this folder is C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETCore\v4.5.

    Now you can write code to detect the light sensor:

    LightSensor light = LightSensor.GetDefault();
    if (light != null)
    {

    If the light sensor is present, the GetDefault method returns a non-null variable that you can use to detect light changes. You can do this by handling the ReadingChanged event, as follows:

    LightSensor light = LightSensor.GetDefault();
    if (light != null)
    {
        light.ReportInterval = 0;
        light.ReadingChanged += (s,e) => _lightsOn = e.Reading.IlluminanceInLux < 10;
    }

    If the reading is below 10, the _lightsOn variable is True, and you can use it to draw the background differently. If you look at the spriteBatch Draw method, you will see that the third parameter is a color. Until now, you have used only white. This color is used to tint the bitmap. If you use white, the bitmap colors stay the same; if you use black, the bitmap is drawn entirely black. Any other color tints the bitmap. You can use this color to turn on the lights, using a green color when the lights are off and white when they are on. In the Draw method, change the background drawing:

    _spriteBatch.Draw(_backgroundTexture, rectangle, _lightsOn ? Color.White : Color.Green);

    Now, when you run the program, you will see a dark green background when the lights are off and a light green one when they are on (Figure 8).

    Figure 8. The finished game

    Now you have a complete game. It is by no means finished; it still needs a lot of polish (animations when a goal is scored, the ball coming back when the goalkeeper catches it or when it hits a post), but I leave that as homework for you. The final step is to port the game to Windows 8.

    Porting the Game to Windows 8

    Porting a MonoGame game to other platforms is easy. You just create a new project in the solution, of type MonoGame Windows Store Project, delete the Game1.cs file, and add the four .xnb files from the Content folder of the Windows Desktop app to the Content folder of the new project. You will not add new copies of the files; instead, you add links to the original files. In Solution Explorer, right-click the Content folder of the new project, click Add/Existing Files, select the four .xnb files from the Desktop project, click the down arrow next to the Add button, and select Add as link. Visual Studio adds the four links.

    Next, add the Game1.cs file from the old project to the new one. Repeat the procedure you used for the .xnb files: right-click the project, click Add/Existing Files, select Game1.cs from the other project, click the down arrow next to the Add button, and click Add as link. The last change is in Program.cs, where you must change the namespace for the Game1 class, because you are using the Game1 class from the desktop project.

    Done: you have created a game for Windows 8!

    Conclusion

    Developing games is a difficult task in itself. You have to remember your geometry, trigonometry, and physics classes and apply all those concepts to game development (wouldn't it be great if teachers used games when teaching these subjects?).

    MonoGame makes this task a little easier. You do not need to use DirectX, you can use C# to develop your games, and you have full access to the hardware. Touch, sound, and sensors are available to your games. Besides that, you can develop one game and port it, with small changes, to Windows 8, Windows Phone, Mac OS X, iOS, or Android. That is a real bonus when you want to develop multiplatform games.

    For More Information

    About the Author

    Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) in Brazil. He is a developer, consultant, and author who has written five Delphi books, published in Portuguese by Pearson Education Brazil, and many articles for Brazilian and American magazines and web sites.

Qbex and Intel want to see apps with children's and youth themes!





    Objective: Qbex (a Brazilian manufacturer located in the largest information technology hub in Brazil) is interested in seeing Android apps with children's and youth themes, compatible with x86, developed by Intel Software Partners.

    Details: For this opportunity, Qbex is looking for games, apps, and content with a children's theme that use Intel® architecture and run on 7'' tablets, with a preference for educational games.

    Benefits: All apps that are selected (see the requirements and process below) can join Intel in the Qbex campaign. The initial impact of the campaign targets up to 8 thousand devices through an online and offline campaign, in addition to actions with promoters at points of sale.

    Requirements: Developers interested in participating must send an email titled TABLET KIDS, with a presentation of the application and a download link for the current version. If the application is already part of the Intel Showroom, the company only needs to send the presentation and the Showroom link in the body of the email.

    Dates: The selection period for these applications starts on June 6 and runs until July 1.

    Process: The applications will be presented to the Intel and Qbex marketing and business teams, and an email will be sent to all selected companies with the instructions needed to participate in the campaign.

    Contact emails: Raul Miranda (notebook@qbex.com.br) and Juliano Alves (juliano.alves@intel.com).

    More: Only submissions of Freemium applications developed in Brazil by companies already registered in the Intel® software partner program will be accepted. To register, go to: https://software.intel.com/pt-br/grow-business-reports

    Also visit the Brazil Software Partners site to see the exclusive page of opportunities and actions for companies in Brazil.

  • Game Optimization for Ultrabook™ Devices


    Download as PDF

    By Lee Bamber

    1. Introduction

    Recently, I had the task of preparing a game engine I’ve been working on for the Games Developer Conference. Given the importance of the event, I needed my game to run fast on the three devices I had in hand, which ranged from the current Ultrabook™ technology to a system two generations old.

    In this paper you will learn how to improve the speed of your 3D game and understand what to look out for when porting your application to Ultrabook systems. Whether you are an experienced game developer or a hobby coder getting into the industry, you will no doubt appreciate the importance of performance. A game that runs at a super smooth frame rate will feel polished and professional compared to a game staggering along at a measly five frames per second (FPS). No amount of gorgeous graphics will disguise the fact your game lurches along, tears the screen as it continually misses the monitor’s vertical sync step, and sends your game physics into pure pandemonium. With this case study of an actual game project port, I hope you will gain insight into the real-world problems you may encounter and possible solutions.


    Figure 1: You may gain sales with your screen shot, but you’ll pay the price online if your FPS is low!

    This article highlights a few of the common causes of performance loss and specifically helps game developers move a typical high-end AAA 3D game title to the Ultrabook device with the performance demanded by modern audiences. Such titles often require a high-end discrete graphics card to work well and put extremely high demands on the GPU. Understanding the architectural differences between a dedicated and integrated GPU can help, but the very best method of improving graphics performance is by analyzing the pipeline for bottlenecks and optimizing those areas without adversely affecting visual quality.

    You should have a basic understanding of graphics API calls in general, a familiarity with the components that make up a typical 3D game, and some knowledge or use of an Ultrabook.

    2. Why Is Performance Important?

    As the market for applications and games becomes increasingly crowded, the unique selling points for your product become ever more crucial for commercial success, and performance today is not just desirable but absolutely essential. Many users would not even consider your game as finished until it ran smoothly and consistently on their device, and would not bother to play the game beyond an initial negative experience.

    Given the crucial importance of this requirement and the fact that mobile, tablet, and portable computing is rapidly growing, you can appreciate that performance is critical. You might be complacent when adapting your game to the Ultrabook given its exceptional power over these other devices, but users will demand the highest standard and expect a high-end gaming experience.


    Figure 2: Ultrabook™ systems pack a powerful punch in the right hands

    From a skill development point of view, everything you can do to optimize and improve your game code now becomes a vital lesson that can be applied to future projects, making you a better game developer.

    3. Why Optimize?

    Many developers use a desktop PC system to create and test their 3D games, and the presence of a dedicated graphics card can sometimes create a sense of abundance, resulting in algorithms and shaders that push the very limits of what is possible on the GPU. When you run this game on a more limited platform, it may not perform as expected and result in a dramatic reduction in performance. Ultrabooks are amazingly powerful mobile devices, but they do not provide the same level of brute force rendering available on next-gen, high-end GPUs. In addition, Ultrabooks are designed to be used on the go, so your game may very well find itself running on battery power, requiring an efficient rendering pipeline to prevent rapid power loss. Your approach to creating in-game visuals must respect these facts.


    Figure 3: The many destinations of a successful app

    When developing an application, developers traditionally start at the top, and trim their way down to run on as many devices as is practical in the available time.

    Developing on the Ultrabook and porting your game to a desktop powered by a dedicated graphics card would be the easiest route to take, as this virtually eliminates the need to port. However, you may find yourself competing with games that have set the quality bar substantially higher. This approach does have one advantage: you are conscious of battery life from the very beginning, and therefore, you are more likely to develop a 3D game that dials down intensive activity at specific moments in the game such as title screens and HUD pages. Developing on a desktop and optimizing down to the Ultrabook is more common and generally yields a higher level of quality as your original development philosophy aims high and then works out how to deliver it on more form factors.

    4. Desktop to Ultrabook – A Case Study in Performance

    My story begins many weeks before the big GDC event, running my game on a relatively modern PCI Express* 3.0 graphics card worth about $200 and getting 60 FPS with visual settings set to the highest quality. It was by no means a high-end gaming rig, but it was capable of running any 3D game at the highest settings with no noticeable lag and packed a mean punch with its six cores, 6 GB of system memory, and an array of super-fast SSD drives. I knew there would be no desktop systems waiting for me at the event, and I did not want to lug a huge PC system half way around the world with me. Naturally, the solution was to take my Ultrabook, the next most powerful device I owned and more than capable of putting on a good show.


    Figure 4: GDC 2014 – One of the biggest developer conferences…but no pressure

    My Ultrabook has a 4th generation Intel® Core™ processor with Intel® HD Graphics 4000™, and is my device of choice when away from the office. My initial test was painful, dropping so many frames that the whole endeavor seemed far too ambitious. The current build of the 3D game engine relied heavily on shaders and multiple targets for rendering, gobbling up CPU cycles like candy and running everything as fast and as loud as it could. As you can imagine, such a beast was a million miles away from the power-conscious and friendly apps you want on a portable device.

    Despite the audaciousness of the plan, I also knew that modern Ultrabooks are very capable gaming systems and when used correctly could match the desktop for productivity and hands down beat it for convenience. I also played many games that ran great on Ultrabooks, and the mission was not impossible, so I set to work to get the FPS up to the needed 60—my goal for the GDC event.

    As an old-school coder, I learned to program long before the arrival of performance analyzers and graphics debuggers, so my primary method of detecting bottlenecks is to remove huge chunks of the engine until the performance improves. By selectively re-introducing vital chunks of code, I could determine which parts of the engine were slowest. Once the bottlenecks were identified, and since simply removing them altogether was not an option, the careful process of reducing the intensity of each component could begin. Typical examples are skipping normal map calculations in the shader for pixels beyond a certain range from the player, or skipping AI update calls every other cycle to reduce the overhead of these processes. Cumulatively, these small improvements start to add up, and before long the game engine is running at full speed again with hardly any loss in visual quality.

    For coders new to the world of performance tuning, I would heartily recommend you avoid this method of detecting bottlenecks. Numerous tools are available to help you identify performance problems in your application, which not only provide the location of the bottleneck but the nature of the issue. One such set of free tools is the Intel® Graphics Performance Analyzers, which profiles your application as it runs and gives you a snapshot of what your program is doing and how long it’s taking to do it. While demonstrating the game at the event, I found a few issues that I later fixed to improve performance and smoothness of the final result.


    Figure 5: Before & After – Screen shots of the game before and after optimizations

    As you can see in figure 5, I went from 20 fps to 62 fps with only minor visual differences in the before and after scenes. The ‘after’ shot shows the removal of the strong dynamic lighting around the player and a less aggressive fragment shader.

    Hungry Shaders

    It did not take us long to realize that the biggest drain on our performance was in our graphics rendering step.


    Figure 6: Performance metrics panel from the original low-FPS version

    As you can see in Figure 6 the horizontal bar marked in the panel as ‘Rendering’ consumed most of our available cycles, and when we drilled down to the fine detail, it was apparent that rendering the objects to the screen was very costly. From here, it was a short step to realize that a scene rendering hundreds of thousands of polygons, each one using a heavy-duty fragment shader, contributes greatly to a loss in performance. Just how much was it costing? By adding MEDIUM and LOWEST techniques to the shader and scaling back the visual eye candy per-pixel, we gained a factor of six in performance improvement.

    To settle on what LOWEST and MEDIUM actually do, we first had to determine the lowest common denominator of features for the game. By figuring out which features were absolutely essential for playing the game and then disregarding whatever remained, I could create the new LOWEST technique within the shader. Early on, this technique was amazingly simple, with almost all elements removed, including all shadows, normal mapping, dynamic lighting, texture overlays, specular mapping, and so on. By starting at near-zero, it was possible to run the game and see what the ‘best case’ scenario was for this shader running on the Ultrabook. When I compared a screen shot from the HIGHEST setting to one from the LOWEST setting, I saw the most important missing ingredient that would cause users distress when they reduced the setting. The least subtle elements in the shader were shadows and texture overlays, each of which created a dramatic reduction in quality when absent. Adding overlays back in was relatively inexpensive, and I could test the cost by simply adding the shader code for this element back in and running the game again. Shadows, on the other hand, exacted a high price, both in their generation in another part of the engine and in their use within the shader itself. Given the importance of this aspect to preserving visual quality, time was spent investigating various approaches until a faster solution was found, which I’ll detail below.

    Producing the MEDIUM technique for the shader was a little easier and simply involved writing a shader that sits between the highest and lowest settings, while always preferring to err on the side of performance. The intent with this setting was to keep most of the speed benefits of the lowest setting but include the less costly effects such as the player flashlight, dynamic lighting, and slightly better shadows.

    Had I simply removed all visual quality from the lowest setting, I could have achieved almost all the performance improvement required in one go, but gamers dislike poor graphics almost as much as poor performance. By making an effort to preserve 90% of the visual fidelity of the highest setting, and prioritizing which aspects could be reduced or eliminated, I achieved a significant improvement with minimal loss in visual quality. Moving from 5 FPS to over 40 FPS was my single biggest improvement.

    When investigating why your desktop game is running so slowly on an Ultrabook, I highly recommend you dismantle your graphics rendering pipeline and ask some serious questions about where the time is being spent. You can try my method of butchery and remove whole slabs of functionality until your pipeline improves, or you can opt for a more sophisticated approach and use a performance analyzer tool. Whatever method you choose, once the issue has been located, your next most critical task is to arrive at a solution that not only improves the speed of that element but does so without sacrificing visual quality.

    To provide some inspiration for the work required to find these optimal solutions, here are a few of the techniques I devised to solve some of the bottlenecks I discovered.

    Cheaper Shadows

    To solve the shadow issue mentioned above, I had to look for alternatives to a technique called Cascade Shadow Mapping. The technique will not be discussed here in detail, but you can find more information here: http://msdn.microsoft.com/en-gb/library/windows/desktop/ee416307(v=vs.85).aspx. The basic premise is that four render targets are drawn with the shadows of all objects immediately within view of the player camera, each one at a different level of detail.


    Figure 7: Cascade Shadow Mapping – a debug view from the game engine

    A shader is then instructed to re-color a pixel on screen based on whether it falls within the shadows previously calculated. The problem is that this is an intensive shader effect and requires a lot of video memory. You will notice in the fragment shader code below that the IF branch statement is used several times, and some GPU hardware will incur a performance penalty for each IF branch used. In extreme cases, some systems will compute every permutation of pixel output, meaning there is no benefit to branching at all.

    // Sample the depth map belonging to whichever shadow cascade this pixel falls into
    fPercentLit = 0.0f;
    if ( iCurrentCascadeIndex==0 )
    {
    fPercentLit += vShadowTexCoord.z > tex2D(DepthMap1,float2(vShadowTexCoord.x,vShadowTexCoord.y)).x ? 1.0f : 0.0f;
    }
    else
    {
    	if ( iCurrentCascadeIndex==1 )
    	{
    		fPercentLit += vShadowTexCoord.z > tex2D(DepthMap2,float2(vShadowTexCoord.x,vShadowTexCoord.y)).x ? 1.0f : 0.0f;
    	}
    	else
    	{
    		if ( iCurrentCascadeIndex==2 )
    		{
    			fPercentLit += vShadowTexCoord.z > tex2D(DepthMap3,float2(vShadowTexCoord.x,vShadowTexCoord.y)).x ? 1.0f : 0.0f;
    		}
    		else
    		{
    			if ( iCurrentCascadeIndex==3 && vShadowTexCoord.z<1.0 )
    			{
    				fPercentLit += vShadowTexCoord.z > tex2D(DepthMap4,float2(vShadowTexCoord.x,vShadowTexCoord.y)).x ? 1.0f : 0.0f;
    			}
    		}
    	}
    }

    It’s important that the video memory requirement and the dependence on the IF branch statements be reduced. The solution (of which there are many) is to create a single large shadow mega-texture and deposit the results of the lowest level of detail shadow into this target.

    A new, cheaper shader technique was written that simply reads from this shadow mega-texture without needing a single IF statement. Again, the specifics of this technique go beyond the scope of this article, but the underlying practice of first identifying the cause of a performance drop and then creating a second technique that produces a similar visual look without the cost is a sound strategy.

    Maintaining Visual Fidelity

    One thing to keep in mind as you optimize your engine is to protect the visual quality of your game at every stage of development. It’s easy to simply hack away beautiful yet expensive effects for the sake of performance, but it’s more rewarding to treat each issue as an opportunity to gain better performance while retaining the visual quality your game needs. Not only will you achieve the results you are after, but your game will run even better on higher-end systems, which of course means you can add even more features as your game scales up.


    Figure 8: Comparison of a game scene when you reduce the visual quality too much

    When you are developing on a desktop, you will be tempted to use clever and sophisticated fragment shaders to create all manner of surface effects, and simply removing them for a low-end technique would destroy the appearance of the final image to the point where it no longer resembles the original. Maintaining a consistent visual style across all shader techniques is vital if you want to retain the integrity of your game. New users, impressed by a stunning screen shot in an online magazine, will be mighty disappointed when they run your game and see something significantly different.

    Where possible, look for techniques that reproduce the high-end shader effect using low-tech techniques such as pre-baked textures, or even better, limit the expensive pixel effects to an area close to the player.

    Spend the Most on Those Closest To You

    Sounds like good family advice, but it’s a good strategy when making shaders look great on Ultrabooks. With a single IF branch statement, you can determine if the pixel being calculated is close to the player or not. If so, you can use the expensive high-end shader pixel effect as before, and beyond that range you can revert to a cheaper baked or faked effect.


    Figure 9: The blending effect in action; notice the normal map effects up close

    A good technique to use in concert with the above is blending: for the price of an extra IF branch, you can also check whether the pixel lies between two range points. Inside the nearer range point you use only the expensive effect, and beyond the farther range point you calculate only the cheap effect. Between the two range points, you calculate a blended transition between the two results. It is important to note that the band between these two points should be relatively narrow to avoid paying for both computations on too many pixels; the blending range only needs to be wide enough for the transition to go unnoticed by the player. In the code below, you can see how each pixel is treated based on its distance from the view camera; between 400 and 600 units, both code branches are computed.

    float4 lighting = float4(0,0,0,0);
    float4 viewspacePos = mul(IN.WPos, View);
    if ( viewspacePos.z < 600.0f )
    {
        // expensive per-pixel lighting using the bumped surface normal (Ln, Nb, Hn computed earlier)
    	lighting = lit(pow(0.5*(dot(Ln,Nb))+0.5,2),dot(Hn,Nb),24);
    }
    if ( viewspacePos.z > 400.0f )
    {
        // blend toward the cheap directional lighting result (cheaplighting computed earlier)
    	lighting = lerp ( lighting, cheaplighting, min((viewspacePos.z-400.0f)/200.0f,1.0f) );
    }

    The result is alarmingly good and creates a soft almost unnoticeable transition when rendered. The upshot for the game is that around 90% of the scene is now using the cheap effect and thus accelerating the speed of the game.

    In-Process to Pre-Process

    Having spent a good deal of time on the graphics optimization side, we were still running a few FPS short of our target of 60. The balance of visual quality and achievable performance was struck, but other parts of the game engine beyond the shader system were causing processing overhead sufficient to degrade game speed.

    The game engine already had an internal performance metrics system that crudely measured each major section of the overall game engine pipeline. In addition to the graphics metric, the engine also measures the time taken for AI, Physics, Weapons, Debugging, and Occlusion, among others. One of the metrics monitored the generation of real-time grass, which allows the engine to provide the game with the illusion of infinite grass. Once we had reduced the cost of graphics processing, we noticed that the relative cost of this process jumped up to become the next hungriest element in the game engine pipeline. When you optimize, you should always watch out for these spikes in performance, and if you determine that they are using an unreasonable amount of game cycles, then a closer examination is warranted. Knowing what is reasonable often comes down to experience and an intimate understanding of the whole engine; in this case the grass should not have been consuming over 10% of the overall game cycles, not with so many other vital services competing for them. On the desktop PC this spike was not obvious, but on the Ultrabook it was a substantial performance hit. In addition to the metric spike, it was apparent when playing the game that whenever new grass was generated ahead of the player, the frame rate would stutter as the spike interrupted the normally smooth running of the game.


    Figure 10: A field of green – generating grass in real time can be extremely compute intensive

    The solution, and another staple of the optimization coder, was to move the entire grass generation system to a pre-process step that happens before the game even starts. Instead of grass being generated on the fly, it was simply moved into place ahead of the player to create a near identical effect. Nothing needs to be generated, just moved, and the Ultrabook breathed a sigh of relief as precious CPU cycles were freed up for the rest of the game engine. I also sighed with relief as the magic 60 FPS was achieved and the game ran at the desired speed.
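
    To make the idea concrete, here is a minimal C# sketch of the "move, don't generate" approach, assuming a pool of grass patches that is built once before the game starts and then only repositioned around the player each frame. All names here (GrassField, GrassPatch, PatchSize) are hypothetical illustrations, not code from the engine described in this article.

    using System;
    using System.Collections.Generic;

    class GrassPatch
    {
        public float X, Z;                                              // world-space position of the patch
        public static GrassPatch Build() { return new GrassPatch(); }  // all expensive generation work happens here, once
    }

    class GrassField
    {
        const float PatchSize = 32.0f;                                  // world units covered by one patch
        readonly List<GrassPatch> _patchPool = new List<GrassPatch>();

        public void Initialize()
        {
            // Pre-process step: build an 8 x 8 pool of patches before the game starts.
            for (int i = 0; i < 64; i++)
                _patchPool.Add(GrassPatch.Build());
        }

        public void Update(float playerX, float playerZ)
        {
            // Per-frame step: nothing is generated; existing patches are simply
            // snapped onto the grid cells surrounding the player.
            float baseX = (float)Math.Floor(playerX / PatchSize) * PatchSize;
            float baseZ = (float)Math.Floor(playerZ / PatchSize) * PatchSize;
            int index = 0;
            for (int x = -4; x < 4; x++)
                for (int z = -4; z < 4; z++)
                {
                    _patchPool[index].X = baseX + x * PatchSize;
                    _patchPool[index].Z = baseZ + z * PatchSize;
                    index++;
                }
        }
    }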

    The Mysterious Case of the Strange Stutter

    Having succeeded in achieving ideal gameplay velocity and travelling half way around the world to present the game and engine to the harsh gazes of the GDC attendees, I found that when installing the game on the show devices, a strange stutter effect emerged. The stutter did not exist on the desktop development machines, did not happen on the Ultrabook I used for pre-event testing but was happening on these show devices, and to make things more interesting, they were more powerful than the ones I had tested on.

    After much debate and subsequent research back home, the issue turned out to be related to something called “internal timer resolution.” In short, all games that run at a machine-independent speed (that is, the player in your game will take the same amount of time to run from A to B, irrespective of the machine you are running the game on) require access to a GetTime() command. There are several to choose from, but one of the most popular is the timeGetTime() command, which returns the number of milliseconds that have passed since the machine was switched on. The name implies that you will get the result at a granularity of 1 millisecond, and indeed many desktop systems report the time at this resolution. It so happens that on Ultrabooks and other portable power-saving devices, this granularity is not fixed and the call can return a resolution in the 10-15 millisecond range. If you are using this timer to control physics, which was the case with our game engine, the result is a seemingly random and jagged stutter as the physics update calls sporadically jump from one reported time to another.

    The reason the granularity can go from 1 ms to 10-15 ms is that some systems save battery power by stepping down the processor, and one of the side effects is that the frequency of the timer ticks becomes unpredictable. There are a number of solutions; the one we chose and recommend is to use the QueryPerformanceCounter() function together with its companion QueryPerformanceFrequency(), which reports the exact frequency the high-resolution counter runs at, so the time value you derive from it has a known, consistent granularity.
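
    As a rough illustration of the fix, here is a minimal C# sketch that reads the high-resolution counter through the two Win32 calls mentioned above; a native engine would call the same pair of functions directly, and purely managed code could also simply use System.Diagnostics.Stopwatch, which wraps the same counter.

    using System;
    using System.Runtime.InteropServices;

    static class HighResTimer
    {
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceCounter(out long counter);

        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceFrequency(out long frequency);

        static readonly long Frequency;

        static HighResTimer()
        {
            long frequency;
            QueryPerformanceFrequency(out frequency);   // counter frequency is fixed, independent of power-saving
            Frequency = frequency;
        }

        // Milliseconds since system start, with a resolution that does not
        // collapse to 10-15 ms when the processor steps down to save power.
        public static double GetTimeMs()
        {
            long counter;
            QueryPerformanceCounter(out counter);
            return counter * 1000.0 / Frequency;
        }
    }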

    5. Tricks and Tips

    Do’s

    • Augment shaders with additional techniques instead of replacing them when optimizing for Ultrabook. Your game still needs to run on desktops as well as Ultrabooks, and the process of distribution is much easier with a single game binary. Both DirectX* and OpenGL* shaders allow you to create techniques within a single shader. With additional techniques in place, your game code can detect the platform you are running on and select the best technique, whether it be for performance or graphical quality (see the sketch after this list).
    • Offer your users an options screen so they can select the level of performance/quality they desire; most game players expect this today. It is always a good idea to detect and pre-select the best settings based on the system specification, but they should remain changeable, and the default settings you select should always work on the user's system.
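
    As a sketch of the first point above, this is roughly what selecting a technique at load time looks like in XNA/MonoGame terms. The technique names follow this article's HIGHEST/MEDIUM/LOWEST convention, and DetectQualityLevel() is a hypothetical helper rather than a framework API.

    using Microsoft.Xna.Framework.Graphics;

    static class ShaderQuality
    {
        public static void Apply(Effect sceneEffect)
        {
            // Pick the technique that matches the hardware the game is running on.
            string quality = DetectQualityLevel();   // e.g. returns "LOWEST" on low-end GPUs
            sceneEffect.CurrentTechnique = sceneEffect.Techniques[quality];
        }

        static string DetectQualityLevel()
        {
            // Placeholder: a real implementation might inspect the adapter
            // description, the feature level, or a benchmark run on first launch.
            return "MEDIUM";
        }
    }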

    Don’ts

    • Do not assume you have to run your game at 60 FPS. You can set the monitor refresh interval on most modern devices to skip one or even three vertical sync signals and gain the same smooth, non-tearing screen display at 30 FPS. It's not going to be as smooth as 60 of course, but if your game timings are adjusted, the game will still feel smooth and very playable (see the sketch after this list).
    • Do not underestimate how costly fragment shaders are when developing your game, especially if you are running on low-scoring graphics hardware. If you find your game suffering low performance, switch off or downgrade all shader use as a process of elimination.
    • Do not pre-select a resolution for the user that may not be supported by the display device. Use the Windows* API to interrogate the display device for a compatible default resolution.
    • Do not assume timeGetTime() returns the time at a granularity of 1 ms. When Ultrabook power-saving is enabled, it can be as coarse as 10-15 ms!
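
    To illustrate the first point in the Don'ts list, here is a minimal XNA/MonoGame-style sketch of capping a fixed-step game at 30 updates per second while keeping vertical sync enabled; in native Direct3D the equivalent idea is to present on every second vertical sync.

    using System;
    using Microsoft.Xna.Framework;

    public class ThirtyFpsGame : Game
    {
        public ThirtyFpsGame()
        {
            var graphics = new GraphicsDeviceManager(this);
            graphics.SynchronizeWithVerticalRetrace = true;        // keep vsync on, no tearing
            IsFixedTimeStep = true;
            TargetElapsedTime = TimeSpan.FromSeconds(1.0 / 30.0);  // 30 updates per second
        }
    }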

    6. A Brief Tour of Ultrabook Gotchas

    It might seem an exercise in the obvious, but here is a quick and handy guide to testing, running, and exhibiting your games and 3D applications on an Ultrabook.

    Power-Saving

    If you are presenting to a large audience and want to show your game in its best light, it is vital you plug in the Ultrabook. Do not run on battery power as the system will protect itself by dialling down all manner of hardware settings that you want to keep on ‘red hot maximum’.


    Figure 11: Power Management on the Ultrabook

    As an extra precaution, find the Power Management settings through the Control Panel and double-check that, when using plugged-in power, all power-saving settings are off and as many settings as possible are set to HIGH.

    Graphics

    The Control Panel has another settings panel that gives you access to your specific device’s graphics accelerator settings. You will find settings that control the GPU and driver when in power-savings mode. You must have this setting set to Performance, or the equivalent mode, to ensure your on-board GPU will run as fast as possible.


    Figure 12: Graphic Acceleration Settings on the Ultrabook™

    It might seem odd that you have to do these things, but the Ultrabook has been designed to conserve power at every turn, allowing you to use the device for hours on end. To achieve maximum performance on the Ultrabook, nothing beats plugging into a wall socket and turning every setting to 11.

    Background Tasks

    Old hands will nod sagely at this simple but crucial piece of advice: do a quick scan for any background tasks that may be running on the Ultrabook when Windows starts up. Each was originally intended as a lightweight, helpful background task, but combined they have a propensity to slowly load the CPU with all manner of things.

    As vital as some of these are, when you are demonstrating how fast your 3D game can run on an Ultrabook, it is prudent to cancel any tasks that you will not need for that session. Fear not, as they will reappear the next time you boot the Ultrabook, but for the remainder of the Windows session your device will be dedicated to running one application, yours!

    7. Conclusions

    The subject of game optimization is a broad one, and developers should consider the task of optimization part and parcel of their daily duties. The challenge is to enable your game to run on as wide a range of hardware as possible, and it's at these times that experience and know-how come to the rescue. Using Intel® tools such as the VTune™ analyzer and the Intel Graphics Performance Analyzers accelerates the process of finding the problem. Articles such as this one may give you a few clues as to likely solutions, but it ultimately comes down to your ability to think laterally. How can you do this another way? Is there a faster way to do this? Is there a smarter way to do this? These are great questions to start the process, and the more you ask them, the better you will be at optimizing your games and applications. As I suggested at the start of this article, you will not only become a better coder, you will also have expanded your reach into a market that's growing at an incredible rate!

    Related Content

    Codemasters GRID 2* on 4th Generation Intel® Core™ Processors - Game development case study
    Not built in a day - lessons learned on Total War: ROME II
    Developer's Guide for Intel® Processor Graphics for 4th Generation Intel® Core™ Processors
    PERCEPTUAL COMPUTING: Augmenting the FPS Experience

    About The Author

    When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).  Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

     

    Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel® RealSense™ Technology, and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

    Any software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.
    Intel, the Intel logo, Ultrabook, and VTune are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2014 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.

  • ultrabook
  • game optimization
  • Shader
  • Shadow Mapping
  • performance optimization
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Intermediate
  • Game Development
  • Laptop
  • URL
  • Intel® Buzz Workshop


    The Intel® Buzz Workshop series is back by popular demand! This series of community workshops is designed to help professional game developers tackle the gaming industry's biggest problems. Intel is serious about game development and graphics on Android and Windows. We're listening; we want to hear your ideas. Key highlights include: technical sessions, panels, Developer Showcases, networking, prize giveaways, and great food!

    Read more from Matt Ployhar: https://software.intel.com/en-us/blogs/2014/06/02/2014-intel-buzz-workshop-series

    Upcoming Workshops:

    • San Francisco, July 16, 111 Minna Gallery
      • Join us for our first community-oriented event of the year, designed to help professional game developers tackle the industry's biggest problems in gaming. The day focuses on developing games for the mobile market and includes technical sessions, panels, networking, the opportunity to troubleshoot your creations with our gaming gurus in the Intel Help Lounge, prizes and giveaways, and more.
      • We’ll keep you refueled throughout the day with lunch, dinner, and a happy hour cocktail reception. Just sign up, and we’ll do the rest.
      • Register here: http://www.eventbrite.com/e/intel-buzz-workshop-san-francisco-be-mobile-tickets-11663310275
    • Seattle: August 7 – Stay tuned for more info!
  • gaming
  • Developers
  • Game Development
  • Graphics
  • Touch Interfaces
  • URL
  • Developing Games with MonoGame*


    By Bruno Sonnino

    Download article as PDF

    Developers all over the world want to develop games. And why not? Games are among the best-selling products in the history of computing, and the fortunes the games business generates keep attracting developers. As a developer, I would certainly love to be the one who creates the next Angry Birds* or Halo*.

    But in reality, game development is one of the most difficult areas of software development. You have to dust off those trigonometry, geometry, and physics classes you thought you would never use. On top of that, your game has to combine sound, video, and a storyline in a way that immerses the user. And then you still have to write the code!

    To make things easier, games can be developed with frameworks that work not only with C and C++ but also with C# or JavaScript* (yes, you can build 3D games for your browser with HTML5 and JavaScript).

    One of those frameworks is Microsoft XNA*, built on Microsoft DirectX* technology, which lets you create games for Xbox 360*, Windows*, and Windows Phone*. Microsoft has since begun to retire XNA, but in the meantime the open source community has gained a new member: MonoGame*.

    What Is MonoGame?

    MonoGame is an open source implementation of the XNA application programming interface (API). It implements the XNA API not only for Windows but also for Mac* OS X*, Apple iOS*, Google Android*, Linux*, and Windows Phone. That means you can develop a game for all of these platforms with only minor changes. This is a great feature: you can create your game in C# and port it easily to every major desktop, tablet, and smartphone platform. With this framework, a developer can build a game that reaches the whole world.

    Installing MonoGame on Windows

    You don't even need Windows to develop with MonoGame. You can use MonoDevelop* (an open source, cross-platform integrated development environment [IDE] for the Microsoft .NET languages) or Xamarin Studio*, a cross-platform IDE from Xamarin. With either IDE, you can develop in C# on Linux or on a Mac.

    If you are a Microsoft .NET developer and Microsoft Visual Studio* is your everyday tool, you can, as I did, install MonoGame into Visual Studio and use it to create your games. At the time of this writing, the latest stable version of MonoGame is 3.2. It works in Visual Studio 2012 and 2013 and lets you create touch-enabled DirectX desktop games.

    The MonoGame installation adds several new templates to Visual Studio that you can choose from to create your game, as shown in Figure 1.

    Figure 1. The new MonoGame* templates

    Now, to create your first game, click MonoGame Windows Project and choose a name for it. Visual Studio creates a new project with all the files and references it needs. If you run the project, you should see something like Figure 2.

    Figure 2. Game created from the MonoGame* template

    Boring, isn't it? It's just a blue screen, but it's the starting point for any game you will build. Press Esc to close the window.

    You could start writing your game with the project you have now, but there is a catch: to add any assets (images, sprites, sounds, or fonts), you need to compile them into a format compatible with MonoGame. For that, you need one of the following options:

    • Install XNA Game Studio 4.0
    • Install the Windows Phone 8 software development kit (SDK)
    • Use an external program, such as the XNA Content Compiler

    XNA Game Studio

    XNA Game Studio provides everything you need to create XNA games for Windows and Xbox 360. It includes the content compiler, which compiles assets into the .xnb files a MonoGame project needs. The compiler officially installs only into Visual Studio 2010; if you don't want to install Visual Studio 2010 just for that, you can install XNA Game Studio in Visual Studio 2012 (see the link in the "For More Information" section of this article).

    Windows Phone 8 SDK

    You can install XNA Game Studio directly in Visual Studio 2012, but installing the Windows Phone 8 SDK in Visual Studio 2012 is a better option. You can then use it to create a project that compiles your assets.

    XNA Content Compiler

    If you don't want to install an SDK just to compile your assets, you can use the XNA Content Compiler (see the link in "For More Information"), an open source program that compiles assets into the .xnb files MonoGame can use.

    Creating the First Game

    The game created from the MonoGame template is the starting point for every game, and every game follows the same flow. Program.cs contains the Main function, which initializes and runs the game:

    static void Main()
    {
        using (var game = new Game1())
            game.Run();
    }

    Game1.cs is the heart of the game. Two methods are called in a loop 60 times per second: Update and Draw. In Update you recalculate the data for every element in the game; in Draw you draw those elements. Note that this is a tight loop: you have just 1/60th of a second, about 16.7 milliseconds, to calculate and draw your data. If you take longer than that, the program will skip some draw cycles and the game will show visual glitches.
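
    Stripped down to its skeleton, the Game1 class generated by the template looks roughly like this; only the loop methods are shown here, and the full versions of Update and Draw are developed throughout the rest of this article.

    public class Game1 : Microsoft.Xna.Framework.Game
    {
        protected override void Update(Microsoft.Xna.Framework.GameTime gameTime)
        {
            // Recalculate the data for every element of the game (positions, input, score...).
            base.Update(gameTime);
        }

        protected override void Draw(Microsoft.Xna.Framework.GameTime gameTime)
        {
            // Draw every element at the position just calculated.
            base.Draw(gameTime);
        }
    }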

    Until recently, game input on desktop computers meant the keyboard and mouse. Unless users bought extra hardware, such as driving wheels or joysticks, we could not assume any other input method. With the arrival of new hardware, such as Ultrabook™ devices, 2 in 1 Ultrabooks, and All-in-Ones, the input options have changed. You can use touch input and sensors to give users a more immersive, realistic gaming experience.

    For this first game, we will create a soccer penalty shootout. The user flicks the screen to "kick" the ball, and the computer goalkeeper tries to catch it. The direction and speed of the ball are determined by the user's gesture, while the goalkeeper picks a random direction and speed to defend. If the shot goes in, the user scores a point; if not, the point goes to the goalkeeper.

    Adding Content to the Game

    The first step is to add content to the game, starting with the background field and the ball. To do that, you need to create two .png files: one for the soccer field (Figure 3) and one for the soccer ball (Figure 4).

    Figure 3. The soccer field

    Figure 4. The soccer ball

    To use these files in the game, you must compile them. If you are using XNA Game Studio or the Windows Phone 8 SDK, you need to create an XNA content project. The project doesn't have to be in the same solution; you only use it to compile the assets. Add the images to that project and build it. Then go to the project's target directory and copy the resulting .xnb files to your game project.

    I prefer to use the XNA Content Compiler, which requires no new project and compiles assets on demand. Just open the program, add the files to the list, select the output directory, and click Compile. The .xnb files are then ready to be added to your project.

    Content Files

    When the .xnb files are available, add them to the Content folder of the game. For each file, set the Build Action to Content and set Copy to Output Directory to Copy if Newer. If you don't, you will get an error when loading the asset.

    Create two fields to store the textures for the ball and the field:

    private Texture2D _backgroundTexture;
    private Texture2D _ballTexture;

    These fields are loaded in the LoadContent method:

    protected override void LoadContent()
    {
        // Create a new SpriteBatch, which can be used to draw textures.
        _spriteBatch = new SpriteBatch(GraphicsDevice);
    
        // TODO: use this.Content to load your game content here
        _backgroundTexture = Content.Load<Texture2D>("SoccerField");
        _ballTexture = Content.Load<Texture2D>("SoccerBall");
    }

    Note that the texture names are the same as the file names in the Content folder, but without the extension.

    Next, draw the textures in the Draw method:

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Green);
    
        // Set the position for the background
        var screenWidth = Window.ClientBounds.Width;
        var screenHeight = Window.ClientBounds.Height;
        var rectangle = new Rectangle(0, 0, screenWidth, screenHeight);
        // Begin a sprite batch
        _spriteBatch.Begin();
        // Draw the background
        _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White);
        // Draw the ball
        var initialBallPositionX = screenWidth / 2;
        var initialBallPositionY = (int)(screenHeight * 0.8);
        var ballDimension = (screenWidth > screenHeight) ?
            (int)(screenWidth * 0.02) :
            (int)(screenHeight * 0.035);
        var ballRectangle = new Rectangle(initialBallPositionX, initialBallPositionY,
            ballDimension, ballDimension);
        _spriteBatch.Draw(_ballTexture, ballRectangle, Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    

    This method clears the screen with green, then draws the background and the ball at the penalty mark. The first spriteBatch.Draw call draws the background, resized to the window dimensions, at position 0,0; the second draws the ball at the penalty mark, scaled in proportion to the window size. There is no movement yet, because the positions never change. The next step is to move the ball.

    Moving the Ball

    To move the ball, we must recalculate its position on every iteration of the loop and draw it at the new position. The new position is calculated in the Update method:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        _ballPosition -= 3;
        _ballRectangle.Y = _ballPosition;
        base.Update(gameTime);
    
    }
    
    

    The ball's position is updated on every loop by subtracting three pixels. If you want the ball to move faster, subtract more pixels. The variables _screenWidth, _screenHeight, _backgroundRectangle, _ballRectangle, and _ballPosition are private fields, initialized in the ResetWindowSize method:

    private void ResetWindowSize()
    {
        _screenWidth = Window.ClientBounds.Width;
        _screenHeight = Window.ClientBounds.Height;
        _backgroundRectangle = new Rectangle(0, 0, _screenWidth, _screenHeight);
        _initialBallPosition = new Vector2(_screenWidth / 2.0f, _screenHeight * 0.8f);
        var ballDimension = (_screenWidth > _screenHeight) ?
            (int)(_screenWidth * 0.02) :
            (int)(_screenHeight * 0.035);
        _ballPosition = (int)_initialBallPosition.Y;
        _ballRectangle = new Rectangle((int)_initialBallPosition.X, (int)_initialBallPosition.Y,
            ballDimension, ballDimension);
    }

    This method resets all the variables that depend on the window size. It is called in the Initialize method:

    protected override void Initialize()
    {
        // TODO: Add your initialization logic here
        ResetWindowSize();
        Window.ClientSizeChanged += (s, e) => ResetWindowSize();
        base.Initialize();
    }

    ResetWindowSize is called from two different places: at the start of the process and every time the window size changes. Initialize handles the ClientSizeChanged event, so whenever the window is resized, the size-dependent variables are recalculated and the ball is repositioned at the penalty mark.

    If you run the program, you will see the ball move in a straight line until it reaches the end of the field. When the ball reaches the goal, you can reset its position with the following code:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        _ballPosition -= 3;
        if (_ballPosition < _goalLinePosition)
            _ballPosition = (int)_initialBallPosition.Y;
    
        _ballRectangle.Y = _ballPosition;
        base.Update(gameTime);
    
    }
    
    

    The _goalLinePosition variable is another field, initialized in the ResetWindowSize method:

    _goalLinePosition = _screenHeight * 0.05;

    You must make one more change, in the Draw method: remove all of the calculation code.

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Green);
    
       var rectangle = new Rectangle(0, 0, _screenWidth, _screenHeight);
        // Begin a sprite batch
        _spriteBatch.Begin();
        // Draw the background
        _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White);
        // Draw the ball
    
        _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    
    

    The movement goes straight toward the goal. If you want the ball to move at an angle, you could create a _ballPositionX field and increase it (to move right) or decrease it (to move left). A better way is to use a Vector2 for the ball's position, like this:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        _ballPosition.X -= 0.5f;
        _ballPosition.Y -= 3;
        if (_ballPosition.Y < _goalLinePosition)
            _ballPosition = new Vector2(_initialBallPosition.X,_initialBallPosition.Y);
        _ballRectangle.X = (int)_ballPosition.X;
        _ballRectangle.Y = (int)_ballPosition.Y;
        base.Update(gameTime);
    
    }
    
    

    If you run the program, the ball moves at an angle (Figure 5). The next step is to make the ball move only when the user flicks it.

    Figure 5. The game with ball movement

    Touch and Gestures

    In this game, the ball's movement must be started by a touch flick, and the flick determines the ball's direction and speed.

    In MonoGame, you get touch input by using the TouchPanel class. You can use either the raw input data or the Gestures API. The raw input data is more flexible, because you can process all the input exactly the way you want; the Gestures API converts that raw data into filtered gestures, so you receive only the gesture types you have asked for.

    Although the Gestures API is easier to use, there are situations where it isn't enough. For example, if you want to detect special gestures, such as an X-shaped gesture or multi-finger gestures, you need the raw data.

    For this game we only need the flick, which the Gestures API supports, so that is what we use. First, you must tell the TouchPanel class which gestures you want. For example, the code:

    TouchPanel.EnabledGestures = GestureType.Flick | GestureType.FreeDrag;

    . . . tells MonoGame to detect and report only flick and free-drag gestures. Then, in the Update method, you can process the gestures like this:

    if (TouchPanel.IsGestureAvailable)
    {
        // Read the next gesture
        GestureSample gesture = TouchPanel.ReadGesture();
        if (gesture.GestureType == GestureType.Flick)
        {…
        }
    }
    
    

    First, check whether a gesture is available. If one is, call ReadGesture to get it and process it.

    Initiating Movement with Touch

    Start by enabling the flick gesture for the game in the Initialize method:

    protected override void Initialize()
    {
        // TODO: Add your initialization logic here
        ResetWindowSize();
        Window.ClientSizeChanged += (s, e) => ResetWindowSize();
        TouchPanel.EnabledGestures = GestureType.Flick;
        base.Initialize();
    }

    Up to now, the ball has been moving the whole time the game is running. A private field, _isBallMoving, tells the game when the ball is in motion. In the Update method, when the program detects a flick, it sets _isBallMoving to true and the ball starts to move. When the ball reaches the goal line, _isBallMoving is set back to false and the ball's position is reset:

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
            Keyboard.GetState().IsKeyDown(Keys.Escape))
            Exit();
    
        // TODO: Add your update logic here
        if (!_isBallMoving && TouchPanel.IsGestureAvailable)
        {
            // Read the next gesture
            GestureSample gesture = TouchPanel.ReadGesture();
            if (gesture.GestureType == GestureType.Flick)
            {
                _isBallMoving = true;
                _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f;
            }
        }
        if (_isBallMoving)
        {
            _ballPosition += _ballVelocity;
            // reached goal line
            if (_ballPosition.Y < _goalLinePosition)
            {
                _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
                _isBallMoving = false;
                while (TouchPanel.IsGestureAvailable)
                    TouchPanel.ReadGesture();
            }
            _ballRectangle.X = (int) _ballPosition.X;
            _ballRectangle.Y = (int) _ballPosition.Y;
        }
        base.Update(gameTime);
    
    }
    
    

    The ball increment is no longer a fixed value: the program uses the _ballVelocity field to set the ball's speed in both the x and y directions. gesture.Delta returns the movement delta since the last update. To compute the velocity of the flick, you multiply this vector by the TargetElapsedTime property.

    While the ball is moving, the _ballPosition vector is incremented by the velocity (in pixels per frame) until the ball reaches the goal line. The following code:

    _isBallMoving = false;
    while (TouchPanel.IsGestureAvailable)
        TouchPanel.ReadGesture();

    . . . does two things: it stops the ball, and it flushes any remaining gestures from the input queue. If you don't do this, the user could flick the screen while the ball is still moving, and the ball would start moving again as soon as it stopped.

    When you run the game, you can flick the ball and it moves with the speed and direction of your flick. There is still a problem, though: the code cannot tell where the flick happened. You can flick anywhere on the screen (not just on the ball) and the ball starts to move. You could try to detect the flick position with gesture.Position, but that property always returns 0,0, so that approach doesn't work.

    The solution is to use the raw input to get the touch point and check whether it is near the ball. The following code determines whether the touch hits the ball; if it does, it sets the _isBallHit field:

    TouchCollection touches = TouchPanel.GetState();
    
    if (touches.Count > 0 && touches[0].State == TouchLocationState.Pressed)
    {
        var touchPoint = new Point((int)touches[0].Position.X, (int)touches[0].Position.Y);
        var hitRectangle = new Rectangle((int)_ballPositionX, (int)_ballPositionY, _ballTexture.Width,
            _ballTexture.Height);
        hitRectangle.Inflate(20,20);
        _isBallHit = hitRectangle.Contains(touchPoint);
    }

    Then the movement starts only when the _isBallHit field is true:

    if (TouchPanel.IsGestureAvailable && _isBallHit)

    If you run the game now, the ball moves only when the flick starts on it. But there is still one problem: if you flick the ball too slowly, or in a direction where it can never reach the goal line, the game stalls, because the ball never returns to its starting point. You must add a timeout for the ball's movement; when the timeout is reached, the game resets the ball.

    The Update method has a parameter: gameTime. If you store the gameTime value when the movement starts, you know how long the ball has been moving and can reset the game after the timeout:

    if (gesture.GestureType == GestureType.Flick)
    {
        _isBallMoving = true;
        _isBallHit = false;
        _startMovement = gameTime.TotalGameTime;
        _ballVelocity = gesture.Delta*(float) TargetElapsedTime.TotalSeconds/5.0f;
    }
    
    ...
    
    var timeInMovement = (gameTime.TotalGameTime - _startMovement).TotalSeconds;
    // reached goal line or timeout
    if (_ballPosition.Y < _goalLinePosition || timeInMovement > 5.0)
    {
        _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
        _isBallMoving = false;
        _isBallHit = false;
        while (TouchPanel.IsGestureAvailable)
            TouchPanel.ReadGesture();
    }

    Adding the Goalkeeper

    The game works now, but it needs an element of difficulty: a goalkeeper that keeps moving after the user kicks the ball. The goalkeeper is a .png file (Figure 6) compiled with the XNA Content Compiler. Add the compiled file to the Content folder, set its Build Action to Content, and set Copy to Output Directory to Copy if Newer.

    Figure 6. The goalkeeper

    The goalkeeper is loaded in the LoadContent method:

    protected override void LoadContent()
    {
        // Create a new SpriteBatch, which can be used to draw textures.
        _spriteBatch = new SpriteBatch(GraphicsDevice);
    
        // TODO: use this.Content to load your game content here
        _backgroundTexture = Content.Load<Texture2D>("SoccerField");
        _ballTexture = Content.Load<Texture2D>("SoccerBall");
        _goalkeeperTexture = Content.Load<Texture2D>("Goalkeeper");
    }

    Then it must be drawn in the Draw method:

    protected override void Draw(GameTime gameTime)
    {
    
        GraphicsDevice.Clear(Color.Green);
    
        // Begin a sprite batch
        _spriteBatch.Begin();
        // Draw the background
        _spriteBatch.Draw(_backgroundTexture, _backgroundRectangle, Color.White);
        // Draw the ball
        _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White);
        // Draw the goalkeeper
        _spriteBatch.Draw(_goalkeeperTexture, _goalkeeperRectangle, Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    
    

    _goalkeeperRectangle holds the rectangle where the goalkeeper is drawn in the window. It is updated in the Update method:

    protected override void Update(GameTime gameTime)
    {…
    
       _ballRectangle.X = (int) _ballPosition.X;
       _ballRectangle.Y = (int) _ballPosition.Y;
       _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY,
                        _goalKeeperWidth, _goalKeeperHeight);
       base.Update(gameTime);
    }
    
    

    The _goalkeeperPositionY, _goalKeeperWidth, and _goalKeeperHeight fields are updated in the ResetWindowSize method:

    private void ResetWindowSize()
    {…
        _goalkeeperPositionY = (int) (_screenHeight*0.12);
        _goalKeeperWidth = (int)(_screenWidth * 0.05);
        _goalKeeperHeight = (int)(_screenWidth * 0.005);
    }
    
    

    The goalkeeper is initially positioned at the horizontal center of the screen, near the goal line:

    _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth)/2;

    The goalkeeper starts to move when the ball starts to move, swinging from one side to the other in simple harmonic motion. A sinusoid describes the movement:

    X = A * sin(at + δ)

    where A is the amplitude of the movement (the width of the goal), t is the time of the movement, and a and δ are random coefficients (they add some randomness to the movement, so the user cannot predict the goalkeeper's speed and direction).

    The coefficients are calculated when the user kicks the ball with a flick:

    if (gesture.GestureType == GestureType.Flick)
    {
        _isBallMoving = true;
        _isBallHit = false;
        _startMovement = gameTime.TotalGameTime;
        _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f;
        var rnd = new Random();
        _aCoef = rnd.NextDouble() * 0.005;
        _deltaCoef = rnd.NextDouble() * Math.PI / 2;
    }

    The coefficient a is the goalkeeper's speed, a number between 0 and 0.005, which corresponds to speeds between 0 and 0.3 pixels per second (at most 0.005 pixels in 1/60 of a second). The delta coefficient must be a number between 0 and pi/2. While the ball is moving, you update the goalkeeper's position:

    if (_isBallMoving)
    {
        _ballPositionX += _ballVelocity.X;
        _ballPositionY += _ballVelocity.Y;
        _goalkeeperPositionX = (int)((_screenWidth * 0.11) *
                          Math.Sin(_aCoef * gameTime.TotalGameTime.TotalMilliseconds +
                          _deltaCoef) + (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11);…
    }
    

    The amplitude of the movement is _screenWidth * 0.11 (the size of the goal). (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11 is added to the result so that the goalkeeper moves in front of the goal. Now we can make the goalkeeper catch the ball.

    Hit Testing

    To know whether the goalkeeper caught the ball, you need to know whether the ball's rectangle intersects the goalkeeper's rectangle. This is done in the Update method, after the two rectangles have been calculated:

    _ballRectangle.X = (int)_ballPosition.X;
    _ballRectangle.Y = (int)_ballPosition.Y;
    _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY,
        _goalKeeperWidth, _goalKeeperHeight);
    if (_goalkeeperRectangle.Intersects(_ballRectangle))
    {
        ResetGame();
    }

    ResetGame is simply refactored code that returns the game to its initial state:

    private void ResetGame()
    {
        _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
        _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2;
        _isBallMoving = false;
        _isBallHit = false;
        while (TouchPanel.IsGestureAvailable)
            TouchPanel.ReadGesture();
    }

    With this simple code, the game knows whether the goalkeeper caught the ball. Next, we need to know whether the shot was a goal. That is checked when the ball passes the goal line, with the following code:

    var isTimeout = timeInMovement > 5.0;
    if (_ballPosition.Y < _goalLinePosition || isTimeout)
    {
        bool isGoal = !isTimeout &&
            (_ballPosition.X > _screenWidth * 0.375) &&
            (_ballPosition.X < _screenWidth * 0.623);
        ResetGame();
    }

    The ball must be entirely inside the goal, so its position must start after the first goalpost (_screenWidth * 0.375) and end before the second one (_screenWidth * 0.625 − _screenWidth * 0.02). Now we can keep the game score.

    Adding Scorekeeping

    To add scorekeeping to the game, we must add a new kind of asset: a spritefont, the font the game will use. A spritefont is an .xml file that describes the font, including the font family, size, weight, and other properties. For this game, you can use a spritefont like this:

    <?xml version="1.0" encoding="utf-8"?>
    <XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
      <Asset Type="Graphics:FontDescription">
        <FontName>Segoe UI</FontName>
        <Size>24</Size>
        <Spacing>0</Spacing>
        <UseKerning>false</UseKerning>
        <Style>Regular</Style>
        <CharacterRegions>
          <CharacterRegion>
            <Start> </Start>
            <End></End>
          </CharacterRegion>
        </CharacterRegions>
      </Asset>
    </XnaContent>

    Compile this .xml file with the XNA Content Compiler and add the resulting .xnb file to the Content folder of the project; set its Build Action to Content and Copy to Output Directory to Copy if Newer. The font is loaded in the LoadContent method:

    _soccerFont = Content.Load<SpriteFont>("SoccerFont");

    ResetWindowSize中,重置得分情况:

    var scoreSize = _soccerFont.MeasureString(_scoreText);
    _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);

    To keep score, declare two variables: _userScore and _computerScore. _userScore is incremented when the user scores a goal; _computerScore is incremented when the user misses, the timeout is reached, or the goalkeeper catches the ball:

    if (_ballPosition.Y < _goalLinePosition || isTimeout)
    {
        bool isGoal = !isTimeout &&
                      (_ballPosition.X > _screenWidth * 0.375) &&
                      (_ballPosition.X < _screenWidth * 0.623);
        if (isGoal)
            _userScore++;
        else
            _computerScore++;
        ResetGame();
    }
    …
    if (_goalkeeperRectangle.Intersects(_ballRectangle))
    {
        _computerScore++;
        ResetGame();
    }
    
    

    ResetGame re-creates the score text and sets its position:

    private void ResetGame()
    {
        _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
        _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2;
        _isBallMoving = false;
        _isBallHit = false;
        _scoreText = string.Format("{0} x {1}", _userScore, _computerScore);
        var scoreSize = _soccerFont.MeasureString(_scoreText);
        _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);
        while (TouchPanel.IsGestureAvailable)
            TouchPanel.ReadGesture();
    }

    _soccerFont.MeasureString measures the string using the selected font, and that measurement is used to calculate the position of the score. The score is drawn in the Draw method:

    protected override void Draw(GameTime gameTime)
    {
    …
        // Draw the score
        _spriteBatch.DrawString(_soccerFont, _scoreText,
             new Vector2(_scorePosition, _screenHeight * 0.9f), Color.White);
        // End the sprite batch
        _spriteBatch.End();
        base.Draw(gameTime);
    }
    
    

    Turning On the Stadium Lights

    As a final touch, the game turns on the stadium lights when the room gets dark. New Ultrabook and 2 in 1 devices usually have a light sensor, which you can use to determine the ambient light level and change the way the background is drawn.

    For desktop apps, we could use the Windows API Code Pack for the Microsoft .NET Framework, a library that gives access to operating system features in Windows 7 and later. For this game, however, we take a different route: the WinRT Sensor APIs. Although these APIs were written for Windows 8, they also work in desktop apps without any changes, so you can later port the app to Windows 8 without touching this code.

    The Intel® Developer Zone (IDZ) has an article on how to use the WinRT APIs from desktop apps (see "For More Information"). Following that guidance, select the project in Solution Explorer, right-click it, and click Unload Project. Then right-click the project again and click Edit Project. Add the TargetPlatformVersion tag to the first PropertyGroup:

    <PropertyGroup>
      <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
      …
      <FileAlignment>512</FileAlignment>
      <TargetPlatformVersion>8.0</TargetPlatformVersion>
    </PropertyGroup>

    Right-click the project again and click Reload Project. Visual Studio reloads the project. With this tag in place, when you add a new reference to the project you will see a Windows tab in the Reference Manager, as shown in Figure 7.

    Figure 7. The Windows* tab in Reference Manager

    Add the Windows reference to the project. You also need a reference to System.Runtime.WindowsRuntime.dll. If you don't see it in the assemblies list, browse to the .NET assemblies folder. On my machine, the path is C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETCore\v4.5.

    Now you can write the code to detect the light sensor:

    LightSensor light = LightSensor.GetDefault();
    if (light != null)
    {

    If a light sensor is present, the GetDefault method returns a non-null value, which you can use to watch for changes in the light level. You do that by handling the ReadingChanged event, like this:

    LightSensor light = LightSensor.GetDefault();
    if (light != null)
    {
        light.ReportInterval = 0;
        light.ReadingChanged += (s,e) => _lightsOn = e.Reading.IlluminanceInLux < 10;
    }

    If the reading is less than 10, the _lightsOn variable is true, and you can use it to draw the background differently. If you look at the spriteBatch Draw method, you will see that the third parameter is a color. So far you have used only white. This color is used to tint the bitmap: with white, the bitmap's colors are unchanged; with black, the whole bitmap turns black. You can use any color to tint the bitmap, and here you can use it to switch the lights: green when the lights are off and white when they are on. In the Draw method, change the way the background is drawn:

    _spriteBatch.Draw(_backgroundTexture, rectangle, _lightsOn ? Color.White : Color.Green);

    Now, when you run the program, you see a dark green background when the lights are off and a lighter background when the lights are on (Figure 8).

    Figure 8. The complete game

    You now have a complete game. It isn't finished, of course; it still needs plenty of improvements (an animation when a goal is scored, the ball bouncing back when the goalkeeper catches it or when it hits a post), but I leave those as homework for you. The final step is to port the game to Windows 8.

    Porting the Game to Windows 8

    Porting a MonoGame game to another platform is very simple. Just add a new project of type MonoGame Windows Store Project to the solution, delete its Game1.cs file, and add the four .xnb files from the Windows Desktop app's Content folder to the new project's Content folder. You don't need to add new copies of the files; add links instead. In Solution Explorer, right-click the Content folder, click Add/Existing Files, select the four .xnb files in the Desktop project, click the down arrow next to the Add button, and choose Add as link. Visual Studio adds the four links.

    Then add the Game1.cs file from the earlier project to the new one, repeating the process you used for the .xnb files: right-click the project, click Add/Existing Files, select the Game1.cs file from the other project's folder, click the down arrow next to the Add button, and click Add as link. The last change is in Program.cs: you need to change the namespace of the Game1 class, because you are now using the Game1 class from the desktop project.

    Done: you have created a game for Windows 8!

    Conclusion

    Game development is a difficult task in itself. You have to remember your trigonometry, geometry, and physics classes and apply those concepts while developing the game (wouldn't it have been great if your teachers had used games when teaching those subjects?).

    MonoGame makes the task easier. You don't have to deal with DirectX; you can develop your games in C# and still have full access to the hardware. You can use touch, sound, and sensors in your games. On top of that, you can develop a game and, with minor changes, port it to Windows 8, Windows Phone, Mac OS X, iOS, or Android, which is a huge advantage when you want to develop multi-platform games.

     

    For More Information

    About the Author

    Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) based in Brazil. He is a developer, consultant, and author who has written five books on Delphi, published in Portuguese by Pearson Education Brazil, and many articles for Brazilian and U.S. magazines and websites.

  • Monogame
  • XNA
  • Developers
  • Microsoft Windows* 8
  • Windows*
  • Beginner
  • Game Development
  • Laptop
  • URL

  • which kind of game that we must do


    Under which category must we make this game?

    Optimizing an Augmented Reality Pipeline using Intel® IPP Asynchronous


    Using Intel® GPUs to Optimize the Performance and Power Consumption of Total Immersion's D'Fusion* Augmented Reality Pipeline

    Michael Jeronimo, Intel (michael.jeronimo@intel.com)
    Pascal Mobuchon, Total Immersion (pascal.mobuchon@t-immersion.com)

    Executive Summary

    This case study details the optimization of Total Immersion's D'Fusion* Augmented Reality pipeline, using the Intel® Integrated Performance Primitives (Intel® IPP) Asynchronous to execute key parts of the pipeline on the GPU. The paper explains the Total Immersion pipeline, the goals and strategy for the optimization, the results achieved, and the lessons learned.

    Intel IPP Asynchronous

    The Intel IPP Asynchronous (Intel IPP-A) library—available for Windows* 7, Windows 8, Linux*, and Android*—is a companion to the traditional CPU-based Intel IPP library. This library extends the successful Intel IPP acceleration library model to the GPU, providing a set of GPU-accelerated primitive functions that can be used to build visual computing algorithms. Intel IPP-A is a simple host-callable C API consisting of a set of functions that operate on matrix data, the basic data type used to represent image and video data. The functions provided by Intel IPP-A are low-, medium-, and high-level building blocks for video analysis algorithms. The library includes low-level functions such as basic math and Boolean logic operations; mid-level functions like filtering operations, morphological operations, edge detection algorithms; and high level functions including HAAR classification, optical flow, and Harris and Fast9 feature detection.

    When a client application calls a function in the Intel IPP-A API, the library loads and executes the corresponding GPU kernel. The application does not explicitly manage GPU kernels; at application run time the library loads the correct highly optimized kernels for the specific processor. The Intel IPP-A library supports third generation Intel® Core™ processors (code named Ivy Bridge) and higher, and Intel® Atom™ processors, like the Bay Trail SoC, that include Intel® Processor Graphics. Allowing the library implementation to manage kernel selection, loading, dispatch, and synchronization simplifies the task of using the GPU for visual computing functionality. The Intel IPP-A library also includes a CPU-optimized implementation for fallback on legacy systems or application-level CPU/GPU balancing.

    Like the traditional CPU-based Intel IPP library, when code is implemented using the Intel IPP-A API, the code does not need to be updated to take advantage of the additional resources provided by future Intel processors. For example, when a processor providing additional GPU execution units (EUs) is released, the existing Intel IPP-A kernels can automatically scale performance, taking advantage of the additional EUs. Or, if a future Intel processor provides new hardware acceleration blocks for video analysis operations, a new Intel IPP-A library implementation will use the accelerators while keeping the Intel IPP-A interface constant. Developers can simply recompile and relink with the new library implementation. Intel IPP-A provides a convenient abstraction layer for GPU-based visual computing that provides automatic performance scaling across processor generations.

    It is easy to integrate Intel IPP-A code with the existing CPU-based code, so developers can take an incremental approach to optimization. They can identify key pixel processing hotspots and target those for offload to the GPU. But they must take care when offloading to the GPU so as not to introduce data transfer overhead. Instead, developers should create an algorithm pipeline that allows significant work to be performed on the GPU before the results are required by the CPU code, minimizing inter-processor data transfer.

    Benefits of GPU Offload

    Offloading time consuming pixel processing operations to the GPU can result in significant power and performance benefits. In particular, the GPU:

    • Has a lower operating frequency – the GPU runs at a lower clock frequency than the CPU, consuming less power for the same computation.
    • Has more hardware threads – the GPU has significantly more hardware threads, providing better performance for operations where performance scales with an increasing number of threads, such as the visual processing operations in Intel IPP-A.
    • Has the potential to run more complex algorithms – due to the better power and performance provided by the GPU, developers can use more computationally intensive algorithms to achieve improved results and/or process more pixels than they could otherwise using the CPU only.
    • Can free the CPU for other tasks – by moving processing to the GPU, developers can reduce CPU utilization, freeing up the CPU processing resources for other tasks.

    The benefits offered by Intel IPP-A programming on the GPU can be applied in a variety of market segments to help ISVs reach specific goals. For example, in Digital Security and Surveillance (DSS), the primary metric is the number of channels of input video that a platform can process (the "channel density"), while in Augmented Reality, decreasing the time to acquire targets to track and increasing the number of objects that can be simultaneously tracked are key.

    Augmented Reality

    Augmented Reality (AR) enhances a user's perception with computer-generated input such as sound, video, or graphics data. AR merges the real world with computer-generated elements, either meta information or virtual objects, resulting in a composite that presents more information and capabilities than an un-augmented experience. AR applications usually overlay information about the environment and objects on a real-time video stream, making the virtual objects interactive. AR technology can be applied to many market segments including retail, medicine, entertainment, and education. For example:

    • Mobile augmented reality systems combine a mobile platform's camera, GPS, and compass sensors with its Internet connectivity to pinpoint the user's location, detect device orientation, and provide information about the scene, overlaying content on the screen.
    • Virtual dressing rooms allow customers to virtually try on clothes, shoes, jewelry, or watches, either in-store or at home, automatically sizing the item to the user in a 3D view on the device.
    • Construction managers can view and monitor work in progress, in real time, through Augmented Reality markers placed throughout a site.

    Total Immersion

    Total Immersion is an augmented reality company, founded in 1998, based in Suresnes, France. Through its patented D'Fusion software solution, Total Immersion combines the virtual world and the real world by integrating real-time interactive 3D graphics into a live video stream. The company maintains offices in Europe, North America, and Asia and supports the world's largest augmented reality partner network, with over 130 solution providers.

    Today, mobile technology is everywhere. Total Immersion (TI) is developing compelling AR experiences for tablets and phones. Intel, recognizing Total Immersion as a leader in Augmented Reality, initiated a collaboration with TI to optimize the D'Fusion software for Intel processors, including GPU offloading. They aimed to improve the AR experience when running on Intel products that power mobile platforms, such as the Intel Atom SoC Z3680.

    Optimization Goals and Strategy

    Augmented Reality applications rely on computer vision algorithms to detect, recognize, and track objects in input video streams. While a large part of the AR processing doesn't deal directly with pixels, the pixel processing required is a computationally intensive, data parallel task appropriate for GPU offload. Intel and Total Immersion planned to offload the pixel processing to the GPU, using Intel IPP-A, so that the pipeline handled the pixel processing—from capture to rendering—and only the metadata about the pixel information would be returned to the CPU as input for higher-level AR operations. By offloading all of the pixel processing to the GPU, the application achieved better performance with less power consumption, making D'Fusion-based applications run efficiently on mobile platforms while conserving battery life.

    The D'Fusion AR Pipeline

    The core of the D'Fusion software is a processing pipeline that consists of the following stages:

    Figure 1 – The D'Fusion AR Pipeline

    • Capture – The first step in the pipeline is capturing input video from the camera. The video can be captured in a variety of formats, such as RGB24, NV12, or YUY2, depending on the specific camera. Frames are captured at the full frame rate, typically 30 FPS, and passed to the next stage in the pipeline. Each captured frame has an associated time stamp that specifies the precise time of capture.
    • Preparation – Computer vision algorithms usually operate on grayscale images, and the TI AR pipeline is no exception. The first step after Capture is to convert the color format of the captured image to grayscale. Next, because computer vision algorithms often do not require the full frame size to operate effectively, input frames can be downscaled to a lower resolution. The reduced number of pixels to process saves computational resources. Then, depending on the orientation of the image, mirroring may also be required. Finally, in addition to the grayscale image required by the computer vision processing, a color image must also be sent down the pipeline so that the scene can eventually be rendered along with the AR-generated information. It is also necessary to obtain a second color format conversion from the camera input format, like NV12, to a format appropriate for display, such as ARGB. All of the operations in the Preparation stage are pixel-intensive operations appropriate to target for offload to the GPU.
    • Detection – Once a frame is prepared, the pipeline applies a feature detection algorithm, either Harris or Fast9, to the reduced-size grayscale input image. The algorithm returns a list of feature points detected in the image. The feature detection algorithm can be controlled by various parameters, including the threshold level. These parameters continuously adjust the feature point detection to return an optimal number of feature points and to adapt to changing ambient conditions, such as the brightness of the input scene. Non-maximal suppression is applied to the feature point calculation to get a better distribution of feature points, avoiding local "clustering." Both feature detection and non-maximal suppression are targeted for offload to the GPU.
    • Recognition – Once the features are generated by the Detection stage of the pipeline, the FERNS algorithm is used to match the features against a database of known objects. Instead of operating on the feature points directly, the FERNS algorithm uses a patch, a square region of pixels centered on the feature point. The patches are taken from a filtered version of the frame that has been convolved with a smoothing filter. Each of the patches is associated with a timestamp of the frame from which they were derived. Since the processing of each patch by the FERNS algorithm is an independent operation, it is easily parallelizable and a candidate for GPU offload. The frame smoothing can also happen on the GPU.
    • Tracking - Many image processing algorithms operate on multi-resolution images called image pyramids, where each level of the pyramid is a further downscaled version of the original input frame. The Tracking stage of the pipeline provides the image pyramid to the Lucas-Kanade optical flow algorithm to track the objects in the scene. Both the image pyramid generation and the optical flow are good candidates to run on the GPU.
    • Rendering – Rendering is the final stage of the pipeline. In this stage, the AR results are combined with the color video and rendered on the output, in this case using OpenGL*. The application renders the color video as an OpenGL texture and uses OpenGL functions to draw the graphics output, based on the video analysis, on top of the video frame.
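
The article does not include source for these stages, and the production implementation runs Intel IPP-A kernels on the GPU. As a rough illustration only, the Preparation and Detection steps map onto standard computer-vision primitives; the following CPU-side sketch uses OpenCV as a stand-in (the function name, parameter values, and the use of OpenCV are assumptions for illustration, not the TI or Intel IPP-A API):

#include <opencv2/imgproc.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// CPU-reference sketch of the Preparation and Detection stages.
// The real pipeline runs equivalent Intel IPP-A kernels on the GPU.
void prepareAndDetect(const cv::Mat& captured,          // e.g., an NV12 frame from the camera
                      cv::Mat& grayForAnalytics,        // downscaled grayscale image for computer vision
                      cv::Mat& colorForDisplay,         // full-size color image for rendering
                      std::vector<cv::KeyPoint>& features)
{
    // Preparation: grayscale conversion, downscale, optional mirror.
    cv::Mat gray;
    cv::cvtColor(captured, gray, cv::COLOR_YUV2GRAY_NV12);     // grayscale for analytics
    cv::resize(gray, grayForAnalytics, cv::Size(), 0.5, 0.5);  // reduce resolution to save work
    cv::flip(grayForAnalytics, grayForAnalytics, 1);           // mirror if the orientation requires it

    // Second conversion: color frame for display (BGRA here stands in for the ARGB display format).
    cv::cvtColor(captured, colorForDisplay, cv::COLOR_YUV2BGRA_NV12);

    // Detection: FAST (Fast9-style) corners with non-maximal suppression; the
    // threshold would be adapted to ambient conditions at run time.
    const int threshold = 20;
    const bool nonMaxSuppression = true;
    cv::FAST(grayForAnalytics, features, threshold, nonMaxSuppression);
}

Each of these calls corresponds to a pixel-parallel operation, which is why the whole stage is a good offload candidate.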

    Optimization Strategy

    Initial profiling of the TI application confirmed that the pixel processing operations mentioned in the prior section were the primary bottlenecks in the AR pipeline. However, other bottlenecks existed, including a CPU-based copy of the color image data to an OpenGL texture.

To simplify collaboration, Intel delivered the optimizations to Total Immersion as a library to be incorporated into the TI software. The library, dubbed PixelFlow, encapsulates the pixel processing required by the TI AR pipeline and is implemented using the Intel IPP-A library. Intel and Total Immersion decided that PixelFlow would target the Preparation, Detection, and Rendering bottlenecks first, while also providing information required for the Recognition and Tracking stages. Moving the first stages of the pipeline to the GPU would be a milestone toward the eventual goal of handling all pixel processing operations on the GPU.

    To implement the Preparation and Detection stages, the operations performed by PixelFlow on the GPU included color format conversion, resizing, mirroring, Fast9 and Harris feature point detection, and non-maximal suppression. To support the Recognition and Tracking stages, the library provides a smoothed frame to be used by the FERNS algorithm and an image pyramid of the input to be used by the optical flow algorithm. Finally, PixelFlow also provides a GPU texture of the color input frame suitable for use in OpenGL.
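
As above, a hedged CPU-side sketch (again using OpenCV as a stand-in; the names and parameter choices are assumptions, not the PixelFlow API) of the two additional outputs PixelFlow provides, the smoothed frame for FERNS and the image pyramid for Lucas-Kanade tracking:

#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

// CPU-reference sketch of the data PixelFlow supplies to the later stages:
// a smoothed frame for FERNS patch matching and an image pyramid for LK optical flow.
void buildRecognitionAndTrackingInputs(const cv::Mat& grayFrame,
                                       cv::Mat& smoothed,
                                       std::vector<cv::Mat>& pyramid)
{
    // Smoothed copy of the frame; FERNS samples its patches from this image.
    cv::GaussianBlur(grayFrame, smoothed, cv::Size(3, 3), 0.0);

    // Image pyramid for pyramidal Lucas-Kanade tracking.
    const cv::Size winSize(21, 21);
    const int maxLevel = 3;
    cv::buildOpticalFlowPyramid(grayFrame, pyramid, winSize, maxLevel);
}

// The Tracking stage then feeds two consecutive pyramids to the optical flow, e.g.:
//   cv::calcOpticalFlowPyrLK(prevPyramid, currPyramid, prevPoints, currPoints, status, err);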

    Implementation

    The PixelFlow framework was conceived as a flexible framework for analysis of multiple video input streams derived from a single video capture source. The PixelFlow pipeline runs on the GPU, operating asynchronously with the CPU. Each video capture source serves frames to one or more logical video streams, where the color format and resolution of each stream is independently configurable. Each stream runs on a separate thread and can use Intel IPP-A to analyze the video frames, producing meta information. The following diagram shows the general design of the framework.

Figure 2 – The Design of the PixelFlow Framework

The TI Augmented Reality pipeline comprises two video streams: the Analytics Stream and the Graphics Stream. The Analytics Stream processes a grayscale input frame, performing feature detection with non-maximal suppression, image pyramid generation, and smoothing of the input frame. The Graphics Stream converts the color camera input to ARGB for display. In both cases, the resulting data is placed in a queue for access by the CPU-based code. The following diagram shows the basic organization of the pipeline and the functions targeted for offload to the GPU.

Figure 3 – The PixelFlow implementation for the TI AR pipeline

    The information on each queue has a timestamp of the original frame capture, allowing the CPU software to correlate each frame with the corresponding data produced by the analytics stream.
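
The queue structures themselves are not shown in the article; the pattern described is a producer/consumer hand-off keyed by the capture timestamp. A minimal sketch of what such a timestamped queue might look like on the CPU side (the type and member names are assumptions, not the actual PixelFlow interface):

#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <optional>

// One entry per processed frame; Payload is, for example, feature-point metadata
// (Analytics Stream) or an ARGB frame handle (Graphics Stream).
template <typename Payload>
struct TimestampedQueue {
    struct Item { std::uint64_t captureTimeUs; Payload data; };

    void push(Item item) {
        { std::lock_guard<std::mutex> lock(m_); items_.push_back(std::move(item)); }
        cv_.notify_one();
    }

    // Pop the entry whose capture timestamp matches; older entries are dropped.
    std::optional<Payload> popMatching(std::uint64_t captureTimeUs) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return !items_.empty() && items_.back().captureTimeUs >= captureTimeUs; });
        while (!items_.empty() && items_.front().captureTimeUs < captureTimeUs)
            items_.pop_front();                        // discard results for frames we no longer need
        if (!items_.empty() && items_.front().captureTimeUs == captureTimeUs) {
            Payload p = std::move(items_.front().data);
            items_.pop_front();
            return p;
        }
        return std::nullopt;                           // no exact match for this timestamp
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<Item> items_;
};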

    Implementation Challenges

    Several challenges were encountered during the implementation of the PixelFlow framework:

• Separate kernels for frame preparation – The initial PixelFlow implementation used separate Intel IPP-A functions for resizing, color format conversion, and mirroring. Because those functions did not support multi-channel images, preparing the ARGB output for the Graphics Stream required one Intel IPP-A function to split the input image into separate channels, followed by other functions to resize and mirror each channel individually before combining them back into an interleaved format. To minimize the kernel overhead and simplify programming, the Intel IPP-A team developed a single hppiAdvancedResize function that combines the resize, color format conversion, and mirroring into a single GPU kernel, allowing the frame to be prepared for the Analytics Stream or the Graphics Stream with a single function call.
• Direct-to-GPU-memory video input – The intention of the PixelFlow pipeline was to have the entire pipeline, from video capture to graphics rendering, on the GPU. However, the graphics drivers for the targeted platforms did not yet support direct-to-GPU-memory video capture. Instead, each frame was captured to system memory and then copied to GPU memory. To minimize the impact of the copy, the PixelFlow implementation took advantage of the Fast Copy feature supported by the Intel IPP-A library. Using a 4K-aligned system memory buffer, the GPU kernel is able to use shared physical memory to access the data, thus avoiding a copy (a minimal sketch of this kind of aligned allocation follows this list).
    • NMS, weights, and orientation for Fast9 – The results produced by the Intel IPP-A Fast9 algorithm did not initially match the CPU-based function that it replaced. An investigation revealed that the TI code was also applying non-maximal suppression to the results of the Fast9 calculation. In addition, the TI code also calculated a weight and orientation value for each detected feature point. The team updated the Intel IPP-A Fast9 function to add NMS as an option and to return the weight and orientation values.
• OpenGL surface sharing and DX9 surface import/export – OpenGL is used for rendering in this pipeline. The video frame is rendered as an OpenGL texture, and other virtual elements are added by calling OpenGL drawing primitives. In the Frame Preparation stage of the pipeline, Intel IPP-A's AdvancedResize function converts the video frame from the input format (NV12, YUY2, etc.) to ARGB. A CPU-based copy of this image into an OpenGL texture was one of the top bottlenecks. The Intel IPP-A team added an import/export capability so that a DX9 surface handle could be extracted from an existing Intel IPP-A matrix, or an Intel IPP-A matrix could be created from an existing DX9 surface. This enabled the use of the OpenGL surface sharing capability in the Intel OpenGL driver. With this functionality, a DX9 surface could be shared with OpenGL as a texture, avoiding the CPU-based copy and keeping the data on the GPU.
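
The Fast Copy path depends on the capture buffer being allocated in 4 KB-aligned system memory so that the GPU can map the same physical pages. A minimal sketch of such an allocation (the constant and function names are assumptions; the actual buffer management is internal to PixelFlow and Intel IPP-A):

#include <cstddef>
#include <cstdlib>
#include <new>
#if defined(_WIN32)
#include <malloc.h>
#endif

// Capture buffers must start on a 4096-byte boundary so the GPU can access the
// same physical pages through shared memory instead of copying them.
constexpr std::size_t kPageAlignment = 4096;

void* allocateAlignedCaptureBuffer(std::size_t bytes) {
#if defined(_WIN32)
    void* p = _aligned_malloc(bytes, kPageAlignment);   // MSVC CRT aligned allocation
#else
    // std::aligned_alloc requires the size to be a multiple of the alignment.
    const std::size_t rounded = ((bytes + kPageAlignment - 1) / kPageAlignment) * kPageAlignment;
    void* p = std::aligned_alloc(kPageAlignment, rounded);
#endif
    if (!p) throw std::bad_alloc();
    return p;
}

void freeAlignedCaptureBuffer(void* p) {
#if defined(_WIN32)
    _aligned_free(p);
#else
    std::free(p);
#endif
}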

    Additional Non-PixelFlow Optimizations

    After implementing the optimizations described in the previous section, a trace performed in the VTune™ analyzer showed that when tracking nine targets, with input video and analytics resolution at 1024x768, several hotspots remained in the computer vision module:

Remaining Hotspots – Ivy Bridge

Function                                                      % of CV   Description
dcvGroupFernsRecognizer::RecognizeAll                         18.95     Using x87 floating point. Should try using SIMD floating-point instructions such as Intel® SSE3 or Intel® AVX.
dcvGaussianPyramid3x3::ConstructFirstPyramidLevelOptim        16.76     General code generation issues. Expect these would be improved by using the Intel® compiler.
dcvPolynomSolver::solve_deg3                                  10.20     General code generation issues. Expect these would be improved by using the Intel compiler.

After rebuilding the computer vision module with the Intel® compiler with Intel® AVX instructions enabled, these hotspots were eliminated. A second trace showed the following remaining hotspots:

Remaining Hotspots – Ivy Bridge (after rebuilding with the Intel compiler)

Function                                                      % of CV   Description
dcvGaussianPyramid3x3::ConstructFirstPyramidLevelOptim        33.56     Image pyramid generation.
dcvCorrelationsDetectorLite::ComputerIntegralImage            16.83     Integral image computation.
dcvKtlOptim::__CalcOpticalFlowPyrLK_Optim_ResizeNN_levels     13.0      LK optical flow.

The second trace uncovered an instance in the code that still used the old CPU-based image pyramid calculation; it was updated to use the image pyramid calculated by PixelFlow. The remaining hotspots were operations not yet included in PixelFlow: integral image computation and LK optical flow. The team will target these functions first when extending the PixelFlow functionality.
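
For reference, the integral image that remains on the CPU is a standard primitive. A minimal sketch using OpenCV as a stand-in (an assumption for illustration, not the TI code):

#include <opencv2/imgproc.hpp>

// Integral image of a grayscale frame: sum(x, y) holds the sum of all pixels above
// and to the left of (x, y), enabling constant-time box sums for the correlation code.
// This is one of the operations slated to move into PixelFlow on the GPU.
void computeIntegralImage(const cv::Mat& gray, cv::Mat& sum)
{
    cv::integral(gray, sum, CV_32S);
}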

    Results – Performance and Power

The resulting AR pipeline offloads its initial stages to the GPU and provides data for subsequent stages of AR processing. To analyze the PixelFlow implementation of the AR pipeline, the team used a test application from Total Immersion, the "AR Player." This configurable test application allows the user to set operating parameters such as the number of targets to track, the video capture resolution and format, the analytics processing resolution, and so on. In addition to the power and performance statistics, the team was interested in the feasibility and impact of increasing the analytics resolution. For the pre-optimized CPU-based flow, the TI AR software used a 320x240 analytics resolution. The additional performance provided by the GPU offload allowed the team to experiment with higher resolutions and assess the resulting impact on responsiveness and quality. The team tested the PixelFlow implementation on Ivy Bridge and Bay Trail platforms.

    Results: Ivy Bridge

    We tested the software on the following Ivy Bridge platform:

Ivy Bridge Platform Details

Item                    Description
Computer                HP EliteBook* 8470p
Processor               Intel® Core™ i7-3720QM processor
Clock Speed             2.6 GHz (3.6 GHz Max Turbo Frequency)
# Cores, Threads        4, 8
L1, L2, L3 Cache        256 KB, 1 MB, 6 MB
RAM                     8 GB
Graphics                Intel® HD Graphics 4000
# of Execution Units    16
Graphics Driver         Igdumdim64, 9.18.10.3257, Win7 64-bit
OS                      Windows* 7 Pro (Build 7601), 64-bit, SP1

    The first test scenario tracked nine targets simultaneously, with both a video capture resolution and an analytics resolution of 640x480.

Test Scenario #1

Metric                  Value
Number of targets       9
Capture resolution      640x480
Analytics resolution    640x480
Performance Results – Ivy Bridge, Test Scenario #1
(FPS rows are in frames per second; the remaining rows are per-frame processing times in milliseconds.)

Metric                     Software (ms)   PixelFlow (ms)   Difference (ms)   Difference (%)
Rendering FPS              60              60
Analytics FPS              30              30
Tracking FPS               30              30
Frame Preprocessing        0.399           0.088            -0.311            -77.83
Tracking                   1.412           1.355            -0.057            -4.03
  Construct Pyramid        0.548           0.025            -0.523            -95.44
Recognition                3.322           1.477            -1.846            -55.55
  Compute Interest Points  1.358           0.035            -1.323            -97.43
  Smooth Image             0.693           0.001            -0.692            -99.89

    The second test scenario also tracks nine targets, but increases the video capture resolution to 1024x768 with an analytics resolution of 640x480.

Test Scenario #2

Metric                  Value
Number of targets       9
Capture resolution      1024x768
Analytics resolution    640x480
Performance Results – Ivy Bridge, Test Scenario #2

Metric                     Software (ms)   PixelFlow (ms)   Difference (ms)   Difference (%)
Rendering FPS              60              60
Analytics FPS              30              30
Tracking FPS               30              30
Frame Preprocessing        0.391           0.094            -0.297            -75.99
Tracking                   1.355           0.900            -0.455            -33.58
  Construct Pyramid        0.532           0.024            -0.508            -95.58
Recognition                2.844           0.917            -1.927            -67.77
  Compute Interest Points  1.225           0.027            -1.199            -97.83
  Smooth Image             0.708           0.001            -0.707            -99.93

    Results: Bay Trail

    Similar tests were run on the following Bay Trail platform:

Bay Trail Platform Details

Item                    Description
Computer                Intel® Atom™ (Bay Trail) Tablet PR1.1B
Processor               Intel® Atom™ processor Z3770
Clock Speed             1.46 GHz
# Cores, Threads        4, 4
L1, L2, L3 Cache        128 KB, 2048 KB
RAM                     2 GB
Graphics                Intel® HD Graphics
# of Execution Units    4
Graphics Driver         Igdumdim32.dll, 10.18.10.3341, Win8 32-bit
OS                      Windows* 8 (Build 9431), 32-bit

The test scenario is slightly different from the first test scenario run on the Ivy Bridge platform because of the different resolutions supported by the camera on the Bay Trail system.

Test Scenario #1

Metric                  Value
Number of targets       9
Capture resolution      640x360
Analytics resolution    640x360
Performance Results – Bay Trail, Test Scenario #1

Metric                     Software (ms)   PixelFlow (ms)   Difference (ms)   Difference (%)
Rendering FPS              55              35
Analytics FPS              30              30
Tracking FPS               15              15
Frame Preprocessing        5.215           0.385            -4.830            -92.62
Tracking                   15.484          10.411           -5.074            -32.77
  Construct Pyramid        6.081           0.122            -5.985            -97.99
Recognition                28.389          15.590           -12.799           -45.09
  Compute Interest Points  9.235           0.365            -8.870            -96.04
  Smooth Image             7.236           0.011            -7.225            -99.85

The second scenario for Bay Trail tests the video capture resolution at 1280x720, while the analytics resolution remains at 640x360.

Test Scenario #2

Metric                  Value
Number of targets       9
Capture resolution      1280x720
Analytics resolution    640x360
Performance Results – Bay Trail, Test Scenario #2

Metric                     Software (ms)   PixelFlow (ms)   Difference (ms)   Difference (%)
Rendering FPS              12              30
Analytics FPS              30              25
Tracking FPS               8               12
Frame Preprocessing        4.865           0.408            -4.458            -91.62
Tracking                   16.158          9.718            -6.440            -39.86
  Construct Pyramid        5.995           0.122            -5.872            -97.96
Recognition                32.398          14.532           -17.865           -55.14
  Compute Interest Points  8.864           0.376            -8.488            -95.76
  Smooth Image             7.337           0.013            -7.324            -99.82

    Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

    For more complete information about performance and benchmark results, visit Performance Test Disclosure

    Power Analysis

    After implementing GPU offload using the PixelFlow pipeline, investigations into the power savings achieved by the GPU offload yielded unexpected results; instead of achieving a significant power savings from offloading the processing to the GPU from the CPU, the power consumption of the PixelFlow implementation was on par with the CPU-only implementation. The following GPUView trace shows why this occurred.

Figure 4 – GPUView trace of the processing for a single frame

    The application dispatched the work to the GPU in separate chunks: CPU setup, GPU operation, wait for completion, CPU setup, GPU operation, wait for completion, etc. This approach impacted power consumption, causing the processor package to be continually active and not allowing the processor to enter deeper sleep states.

    Instead, the pipeline should consolidate GPU operations and maximize CPU/GPU concurrency. The following diagram illustrates the ideal situation to achieve maximum power savings: GPU operations consolidated into a single block, executing concurrently with CPU threads and leaving a period of inactivity that allows the processor package to achieve deeper sleep states.

Figure 5 – Ideal pattern to maximize power savings
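
The dispatch code itself is not shown in the article, but the difference between the two traces can be illustrated with a small, deliberately simplified sketch. Here std::async stands in for asynchronous GPU submission (an assumption for illustration; the real pipeline dispatches Intel IPP-A kernels through the graphics driver), and the simulated work durations are arbitrary:

#include <chrono>
#include <future>
#include <thread>
#include <vector>

// Stand-ins for GPU kernels and the CPU-side AR work; in the real pipeline these
// are Intel IPP-A submissions and the recognition/tracking code.
void gpuKernel(int /*stage*/) { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }
void cpuWork()                { std::this_thread::sleep_for(std::chrono::milliseconds(5)); }

// Power-unfriendly pattern: setup, submit, wait, repeat. The processor package
// never goes idle long enough to reach deeper sleep states.
void perOperationDispatch(const std::vector<int>& stages) {
    for (int stage : stages) {
        auto op = std::async(std::launch::async, gpuKernel, stage);
        op.wait();           // CPU wakes up again immediately after every kernel
    }
    cpuWork();
}

// Preferred pattern: consolidate the submissions for the frame, overlap them with
// the CPU work, then synchronize once, leaving one long idle period per frame.
void consolidatedDispatch(const std::vector<int>& stages) {
    std::vector<std::future<void>> inflight;
    for (int stage : stages)
        inflight.push_back(std::async(std::launch::async, gpuKernel, stage));
    cpuWork();               // runs concurrently with the queued "GPU" work
    for (auto& op : inflight)
        op.wait();           // single synchronization point per frame
}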

    Conclusion

    Moving the key pixel processing bottlenecks of the Total Immersion AR pipeline to the GPU resulted in performance gains on Intel processors, allowing the application to use a larger input frame size for video analysis, find targets faster, track more targets, and track them more smoothly. We expect similar gains can be achieved for similar video analysis pipelines.

While achieving performance benefits using Intel IPP-A is fairly straightforward, achieving power benefits requires a careful design of the processing pipeline. The best design is one that consolidates the GPU operations and maximizes CPU/GPU concurrency to allow the processor to reach deeper sleep states. Diagnostic and profiling tools that are GPU-capable, like GPUView and the Intel VTune analyzer, are essential, as they can help to identify power-related problems with the pipeline. Consider using these tools during development to verify the power efficiency of a pipeline and avoid having to re-architect the pipeline later to address power-related issues.

    The PixelFlow pipeline offloaded several of the pixel processing bottlenecks in the TI pipeline. Work remains to move additional operations to the GPU such as integral image, optical flow, FERNS, etc. Once these operations are included in PixelFlow, all of the pixel processing will occur on the GPU with these operations returning metadata to the CPU as input for higher-level operations. The success of the current PixelFlow implementation, which uses IPP-A-based GPU offload, indicates that further gains are possible with additional offloading of pixel processing operations.

Finally, power and performance optimization goes beyond the vision processing algorithms and extends to other areas such as video input, codecs, and graphics output. Intel IPP-A allows for DX9-based surface sharing with related Intel technologies such as the Intel® Media SDK for codecs and the OpenGL graphics driver. Understanding the optimization opportunities with these related technologies is also important, as it allows developers to create entire GPU-based processing pipelines.

    Author Biographies

Michael Jeronimo is a software architect and applications engineer in Intel's Software and Services Group (SSG), focused on helping customers accelerate computer vision workloads using the GPU.

    Pascal Mobuchon is the VP of Engineering at Total Immersion.

    References

Item                                       Location
Total Immersion web site                   http://www.t-immersion.com/
Total Immersion Wikipedia page             http://en.wikipedia.org/wiki/Total_Immersion_(augmented_reality)
Augmented Reality Wikipedia page           http://en.wikipedia.org/wiki/Augmented_reality
Intel® VTune™ Amplifier XE                 https://software.intel.com/en-us/intel-vtune-amplifier-xe
Intel® Graphics Performance Analyzers      https://software.intel.com/en-us/vcsource/tools/intel-gpa
GPUView                                    http://msdn.microsoft.com/en-us/library/windows/hardware/ff570133(v=vs.85).aspx
Intel® IPP-A web site                      https://software.intel.com/en-us/intel-ipp-preview
How to Add Sound to HTML5 Games for Intel® Architecture-based Android* Devices


Introduction

Sound is one of the most important components of interactive games. To impress players, games need not only high-quality graphics and a compelling story, but also great-sounding audio. Adding sound effects to games or applications not only makes them more entertaining, it also raises the overall perceived quality.

The Audio Tag

Among the exciting new features of HTML5 are the audio and video tags. Over time, these tags may come to replace today's popular video technologies. To use audio or video with HTML5, start by creating an <audio> element and specifying a source URL for the audio, including the controls attribute.

    <audio controls><source src="horse.ogg" type="audio/ogg"><source src="horse.mp3" type="audio/mpeg">
    Your browser does not support the audio element.</audio>

This attribute adds the sound controls, such as play, pause, and volume. The audio element supports multiple source elements, which can point to different audio files. MIME types (also known as Internet Media Types) characterize file formats so that the framework knows how to handle them. Along with the source, you should specify a type attribute; it tells the browser the MIME type and codecs of the media before it downloads the file. If the attribute is not provided, the browser uses a heuristic to try to detect the media type: it will use the first configuration it identifies and, if it does not recognize the format, fall back to the default.

The canPlayType Method

Fortunately, the audio API gives us a way to find out whether the mobile browser supports a particular format. To get hold of the audio element, we can look up the element we marked up in HTML as shown below:

    var audio = document.getElementById('myaudio');

Alternatively, we can create the element entirely in JavaScript*:

    var audio = new Audio();

Once we have the audio element, we are ready to access its methods and properties. One way to check format support is the canPlayType method, which takes a MIME type as its parameter:

audio.canPlayType('audio/mpeg');

canPlayType returns one of three values:

1. probably
2. maybe
3. "" (the empty string)

The reason for these oddly vague answers is the general quirkiness of everything surrounding codecs. The mobile browser can tell whether a codec is playable without having to actually play it.

MIME Types for Audio Formats

Attributes

HTML tags are made up of one or more attributes. Attributes are added to tags to give the browser additional information about how the tag should appear or behave. An attribute consists of a name and a value separated by an equals sign (=), with the value in double quotes. An example would be: style="color: blue".

The following section briefly describes the attributes specific to the <audio> tag/element.

src: indicates the location of the audio file. Its value must be the URL of an audio file.

preload: when playing large files, it is best to buffer them. To do so, use the preload attribute, which hints to the browser that we intend to buffer the file before playing it, for an optimal user experience. Possible values are:

• none
• metadata
• auto

autoplay:

Indicates whether to start playing the sound as soon as the object has loaded.

This is a Boolean attribute, so its mere presence equals a true value. We can also specify a value that is a case-insensitive match for the attribute's canonical name, with no leading or trailing whitespace (that is, autoplay or autoplay="autoplay").

Possible values:

• [empty string]
• autoplay

mediagroup:

This attribute is used to synchronize the playback of audio files (or media elements). It lets us specify media elements that should be linked together. The value is a text string, for example: mediagroup=album. The browser or user agent automatically links audio files or media elements that have the same value.

One case where this could be used is when a sign-language translation track for a video needs to be overlaid on another.

loop:

This attribute indicates whether playback of the audio should start over again when it has finished.

It is a Boolean attribute, so its mere presence equals a true value. We can also specify a value that is a case-insensitive match for the attribute's canonical name, with no leading or trailing whitespace (that is, loop or loop="loop").

Possible values:

• [empty string]
• loop

controls:

Instead of playing sounds automatically, which is not advisable, it is better to have the browser present some controls, such as volume and play/pause. To do this, add the controls attribute to the tag.

This is a Boolean attribute, so its mere presence equals a true value. We can also specify a value that is a case-insensitive match for the attribute's canonical name, with no leading or trailing whitespace (that is, controls or controls="controls").

Possible values:

• [empty string]
• controls

Controlling Media Playback

Once we have embedded media in our HTML document using the new elements, we can control them programmatically from JavaScript code. For example, to start (or restart) playback, we can do this:

var v = document.getElementsByTagName("audio")[0];
v.play();
    

The first line grabs the first audio element in the document, and the second calls the element's play() method, which drives the media element. Wiring up an HTML5 audio player to play, pause, and turn the volume up or down with a little JavaScript code is a very direct process:

document.getElementById('demo').play()        // Play the audio
document.getElementById('demo').pause()       // Pause the audio
document.getElementById('demo').volume += 0.1 // Volume up
document.getElementById('demo').volume -= 0.1 // Volume down

Seeking Within Media

Media elements let us move the current playback position to a specific point in the media content. To do this, set the value of the element's currentTime property; in essence, set the number of seconds you want playback to jump to.

We can use the element's seekable property to obtain the start and end times of the media. It returns a TimeRanges object, which lists the ranges of times that can be seeked to.

var audioElement = document.getElementById("myaudio");
audioElement.seekable.start(0);  // Returns the starting time (in seconds)
audioElement.seekable.end(0);    // Returns the ending time (in seconds)
audioElement.currentTime = 122;  // Seek to 122 seconds
audioElement.played.end(0);      // Returns the number of seconds the browser has played
    

The simpleGame Library

The simpleGame library makes it very easy to add new sounds by creating a Sound object. The simpleGame Sound object is based on the HTML5 <audio> tag.

<script type="text/javascript" src="simpleGame.js"></script>

Handling sound effects with the simpleGame library is straightforward:

1. Create the sound effect. The best formats are mp3 and ogg.
2. Create a variable to hold the sound effect. Remember to define the variable outside the function.
3. The simpleGame library provides a Sound object; create an instance of it to build the sound. The Sound object takes one parameter, which can be set in the init function.
4. The sound can be played with the object's play() method.

AppMobi directCanvas

To complement their HTML5 skills, developers may want to explore AppMobi's development tools and environment for building robust applications. AppMobi's App Game Interface (AGI) technology gives hybrid HTML5 applications the ability to accelerate their canvas tag instructions. The technology was developed by AppMobi (http://www.appmobi.com), an HTML5 services company, and was originally called directCanvas.

To use AGI, we first need to understand how it works. The AGI-accelerated canvas instructions need to be stacked in their own "view", similar to the HTML layout, where these commands are translated at a higher level and executed faster. This split view, however, does not include access to the full document object model (DOM) and must rely on a bridge call to exchange information between the regular web view and the accelerated view.

The code for the accelerated "view" sits beneath the HTML5 web view, which means that all graphic elements included in the application's HTML documents using the AGI API will reliably render on top of the accelerated graphics.

Using the AGI Sound Features

The App Game Interface (AGI) technology addresses some of HTML5's weak points around sound with its multi-sound updates. Playing many ad-hoc sounds with low latency was never a goal of HTML5, yet that is exactly what games and other applications need. AGI's multi-sound capability lets every game element play a sound whenever it needs to, regardless of any other sounds playing at the same time. All of the AppMobi APIs are accessible through the AppMobi.context object and are intended to improve performance and broaden usability.

The following methods can be used to control a single background sound:

startBackgroundSound:

This method starts playing a sound that loops continuously in the background.

The accelerated canvas App Game Interface can handle a single background sound. Use this method to start playing a background sound or music. This command is provided, in addition to the Audio object, to improve performance and ease of use.

Syntax

    AppMobi.context.startBackgroundSound("sounds/music_main.mp3",true)

The first parameter is the path and file name of the background sound to play; the second is an optional Boolean value specifying whether the background sound should loop.

toggleBackgroundSound

Use this method to toggle the background sound between playing and stopped. The accelerated canvas App Game Interface can handle a single background sound. Use this method to toggle a background sound or music between playing and stopped. This command is provided, in addition to the Audio object, to improve performance and ease of use.

Syntax

    AppMobi.context.toggleBackgroundSound();

stopBackgroundSound

Use this command to stop playing the background sound. The accelerated canvas App Game Interface can handle a single background sound. Use this method to stop a background sound or music. This command is provided, in addition to the Audio object, to improve performance and ease of use.

Syntax

    AppMobi.context.stopBackgroundSound()

Conclusion

Despite some unpredictable browser behavior, HTML5 is a fantastic technology for building powerful new applications. In this article we looked at how to add sound to our applications with the HTML5 audio element, and at AppMobi's AGI technology, which provides additional tools for building great applications. Other technologies and tools such as JavaScript, PhoneGap, and AppMobi can be combined with HTML5, opening up the possibility of writing applications that would normally require native code.

More Resources

HTML5 is where application development is heading. Intel believes it is important to help experienced developers transition to this cross-platform approach, and to help new developers quickly get up to speed with this exciting approach, so that they can build their applications and games for nearly every modern computing platform. More resources are available on the Intel HTML5 and Intel Android pages.

     

Some interesting demos of HTML5 sound in action:
    http://www.createjs.com/#!/SoundJS/demos/visualizer
    http://www.createjs.com/#!/SoundJS/demos/game
    http://www.createjs.com/#!/SoundJS/demos/explosion

    Other Related Articles and Resources

    Creating cool animations and transitions in HTML5 app for Intel Android devices.
    Using the touch screen in your HTML5 games for Intel Android device
    working with audio tag in HTML5
    HTML5 New tags
    HDMI Audio Case Study: Denon AV Receivers

    To learn more about Intel tools for the Android developer, visit Intel® Developer Zone for Android.
     

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
    Copyright © 2013 Intel Corporation. All rights reserved.
    *Other names and brands may be claimed as the property of others.


    How Sponsors and Developer Evangelists Can Win at Hackathons


    I recently wrote here about the 2014 SoHacks hackathon, and doing so inspired me to do a general write-up on hackathon best practices based on my experience.  I hear comments now and then from those involved with developer evangelism that hackathons are a waste of time and money, and I believe this article can help put that broad claim to rest.

    For those unfamiliar, hackathons are brief, highly-energized events where makers of various backgrounds are invited to, well, make things.  Typically technical things.  They're usually two-day events sponsored by tech industry giants and supported by volunteer mentors.  Those in which I've participated have seen a rich variety of ages and skills, but I've noticed a trending tendency toward middle and high school participants.  The SoHacks event, for example, was comprised entirely of this demographic.

    With the background stuff out of the way, we're left with the question looking for an answer: what good are hackathons?  How do sponsors and developer evangelists benefit from these occasional one-off experiences?  Here are my thoughts:

    Go in with appropriate expectations

    If you expect every hackathon to launch The Next Big Thing, you may very well be consistently disappointed.  Even though I've witnessed some amazing accomplishments during these high-voltage, compressed time frames, typically you'll see a great many more unfinished ideas.  There's also a lot of duplicated effort, as teams intentionally or unwittingly select projects that have been tried (and often failed) at other hackathons.  Many of the projects can't possibly be completed in the time given, and sometimes not even in more conventional work scenarios.  So don't disappoint yourself by looking for industry-changing start-ups to burst forth from any event.  Odds are against it.

    Does that make hackathons a waste?  By no means!  Read on.

    Design your approach

    Work with your outreach teams and local community leaders to develop a strategic "plan of attack".  Benchmark against other, similar events.  Brainstorm on what you intend to achieve.  Don't just plunge in unprepared!  Speaking of which...

    Prepare your communities

    Want the hackathon teams implementing your products and technologies to shine?  Then simply do your jobs as leaders: get them ready.  For your known developers especially, arrange to meet with them at their own community events beforehand and make sure they know what you can provide.  Offer free training, loan them equipment, do whatever you deem necessary to support a team that will make your tech rock.  That said, never work against the rules of the hackathon.

    In general, reach out to all potential attendees and make sure they're likewise informed.  If you're offering a prize with conditions attached, make it very clear what they are and how those requirements may be fulfilled.  I've seen teams with otherwise outstanding projects lose simply because they did not know or understand such requirements.  If you're a sponsor, consider advertising prizes and expectations for upcoming hackathons, especially on your own website.

    Become an active part of the event

    Don't just mail in your support.  Handouts (also known as landfill food), swag and even prizes only go so far.  Show up at hackathons and pitch your product.  Have expert staff on hand with the necessary skills to support your offerings.  If participants need tools, bring them and provide them freely (sponsors commonly "loan" these out with no return date).

    And don't just take my advice on what to do-- here are some useful tips from ProgrammableWeb on what to avoid: Top Ten Mistakes of Running Hackathons

    Understand and work with typical outcomes

    So you get that stories like GroupMe and PhoneGap are the exception (both emerged from hackathons or code camps).  What do you stand to gain from these events?

• Goodwill.  Never underestimate it.  When I was a Nokia Developer Ambassador, I consistently received compliments from attendees who expressed appreciation that Nokia was the only sponsor physically represented.  Over time, this won developers over to our platform.  Is your director looking for bottom-line results?  The number of converted developers is a measurable one.
    • Free press.  People will write about your support and involvement.  Make sure you're an active contributor to those messages.  Do it right and they're blogging for you, not about you.  Just don't be manipulative in the process.
    • Feedback/Experience.  At a hackathon you'll watch how your stuff is used, especially under pressure.  You'll see first-hand how and where it breaks, and identify opportunities for improvement.  You'll also gain keen insight into your competition.

    Keep things going

    Think of hackathons as punctuated equilibrium, infrequent opportunities to fire up your communities.  In between, get busy with sustaining activities.  Establish a "hacking continuum".

    Follow up each event with a postmortem.  Ask your community attendees what they liked, disliked, etc.  And don't just settle for an email survey-- do this in person.  Surveys are great and necessary for usable metrics, but physical presence is key.

    As noted earlier, another critical part of the sustaining activity is pre-hackathon preparatory work.  Do it, and you'll generate valuable community goodwill and ensure that your technologies and products are presented in the best possible light.

    Bottom line

    It's easy to dismiss hackathon outcomes if you look at the events through strictly conventional corporate lenses.  Strip away the context of what I've presented here, and just about every one of them could be labeled a failure.  Don't make that mistake, sponsors and evangelists.  As with most things, "you get out what you put in" with hackathons.  So put in some good, productive effort!

Android Development: Multithreading and Handler in Detail


Why Multithreading Is Needed in Android Development

The Service, Activity, and Broadcast components we create are all handled on a single main thread, which we can think of as the UI thread. Time-consuming operations, however, such as reading and writing large files, database access, or network downloads, can take a long time. To keep from blocking the user interface and triggering the ANR (Application Not Responding) dialog, we can use a Thread to do that work.

Problems with Using Thread on Android

For programmers coming from J2ME, Thread is simple enough: create an anonymous subclass that overrides run() and call start(), or implement the Runnable interface. On Android, however, the UI controls are not designed to be thread-safe, so a synchronization mechanism is needed to refresh them; here Google's design of Android borrows from the Win32 message-handling model.

The postInvalidate() Method

To refresh a View-based UI from a worker thread, you can call postInvalidate(). Overloads are provided, such as postInvalidate(int left, int top, int right, int bottom) to refresh a rectangular region, and delayed variants such as postInvalidateDelayed(long delayMilliseconds) and postInvalidateDelayed(long delayMilliseconds, int left, int top, int right, int bottom), where the first parameter is in milliseconds:

void postInvalidate()
void postInvalidate(int left, int top, int right, int bottom)
void postInvalidateDelayed(long delayMilliseconds)
void postInvalidateDelayed(long delayMilliseconds, int left, int top, int right, int bottom)

    Handler

The recommended approach, however, is to use a Handler: in a thread's run() method, call the handler object's post... or sendMessage... methods. Android maintains an internal message queue and processes these messages in turn. Win32 programmers will find this message handling familiar, although Android does not expose hooks such as PreTranslateMessage to intervene in the internal dispatch.

The Handler is the message handler. It wraps the information to be delivered into a Message, typically obtained by calling the handler's obtainMessage(). The message is passed to the Looper by calling the handler's sendMessage(), and the Looper places the Message into the MessageQueue. When the Looper sees that the MessageQueue contains a Message, it dispatches it, and the Handler that owns the message processes it in its handleMessage() method.

A Handler mainly receives data sent from child threads and uses that data to update the UI on the main thread. When an application starts, Android first starts a main thread (the UI thread). The main thread manages the UI controls and dispatches events; for example, when you click a Button, Android dispatches the event to that Button so it can respond to your action. If you then need a time-consuming operation, such as fetching data over the network or reading a large local file, you must not put that work on the main thread: the interface would appear to freeze, and if the operation does not finish within about 5 seconds, the Android system shows a "force close" error. The time-consuming work therefore has to go into a child thread. But UI updates may only happen on the main thread, because Android's UI toolkit is not thread-safe, so a child thread cannot touch the UI directly. This is where Handler comes in. Because the Handler runs on the main thread (the UI thread), it can exchange data with child threads through Message objects: the Handler receives the Message objects the child thread sends via sendMessage(), which carry the data, places them in the main thread's queue, and updates the UI in cooperation with the main thread.


Handler has a few characteristics: a Handler can dispatch Message objects and Runnable objects to the main thread, and each Handler instance is bound to the thread that created it (usually the main thread).

It serves two purposes:
(1) schedule a Message or Runnable to be run at some point on the main thread;
(2) arrange for an action to be executed on a different thread.

Handler methods for dispatching messages:
    post(Runnable)
    postAtTime(Runnable, long)
    postDelayed(Runnable, long)
    sendEmptyMessage(int)
    sendMessage(Message)
    sendMessageAtTime(Message, long)
    sendMessageDelayed(Message, long)

The post... methods enqueue a Runnable object on the main thread's queue; the sendMessage... methods enqueue a Message object carrying data, to be processed when it is dequeued.

Handler example

// The subclass must extend the Handler class and override handleMessage(Message msg) to receive data from the thread.
// The following example shows how to change the content of a Button in the UI from a thread.

import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import android.util.Log;
import android.widget.Button;

public class MyHandlerActivity extends Activity {

    Button button;
    MyHandler myHandler;

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.handlertest);

        button = (Button) findViewById(R.id.button);
        myHandler = new MyHandler();

        // When a new Handler instance is created, it is bound to the current thread
        // and its message queue and begins dispatching data.
        // A Handler has two uses: (1) schedule Message and Runnable objects to be
        // processed at some point; (2) run an action on a different thread.
        // Messages are scheduled with the following methods:
        //   post(Runnable)
        //   postAtTime(Runnable, long)
        //   postDelayed(Runnable, long)
        //   sendEmptyMessage(int)
        //   sendMessage(Message)
        //   sendMessageAtTime(Message, long)
        //   sendMessageDelayed(Message, long)
        // The post... methods take Runnable objects; sendMessage() takes a Message
        // object, which can carry data.

        MyThread m = new MyThread();
        new Thread(m).start();
    }

    /**
     * Receives and handles messages; the Handler runs together with the main thread.
     */
    class MyHandler extends Handler {

        public MyHandler() {
        }

        public MyHandler(Looper looper) {
            super(looper);
        }

        // Subclasses must override this method to receive the data.
        @Override
        public void handleMessage(Message msg) {
            Log.d("MyHandler", "handleMessage......");
            super.handleMessage(msg);
            // The UI can be updated here.
            Bundle b = msg.getData();
            String color = b.getString("color");
            MyHandlerActivity.this.button.append(color);
        }
    }

    class MyThread implements Runnable {

        public void run() {
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Log.d("thread.......", "mThread........");

            Message msg = new Message();
            Bundle b = new Bundle();        // holds the data
            b.putString("color", "my");
            msg.setData(b);
            // Send the message via the Handler to update the UI.
            MyHandlerActivity.this.myHandler.sendMessage(msg);
        }
    }
}

Looper

In fact, every Thread in Android can have an associated Looper; the Looper helps the Thread maintain its message queue. (The "Can't create handler inside thread" error discussed in the previous article relates to this concept.) Looper and Handler are not tightly coupled, and the open source code shows that Android also provides a Thread subclass, HandlerThread, to help here: the HandlerThread object exposes its Looper through getLooper(), and that Looper can be attached to a Handler to obtain a thread-synchronized mechanism. A Looper must be initialized with Looper.prepare(), which was the issue we saw previously, and on exit its resources must be released by quitting the Looper (Looper.quit()).

The Looper is the manager of the MessageQueue. A MessageQueue cannot exist apart from a Looper; the Looper object is created through the prepare() function, and each Looper object is associated with one thread. Looper.myLooper() returns the Looper object of the current thread.
Creating a Looper object also creates a MessageQueue object. Apart from the main thread, which has a default Looper, other threads do not have a MessageQueue by default, so they cannot receive Messages. If a thread needs to receive messages, it must create its own Looper object (via the prepare() function); the thread then has its own Looper object and MessageQueue data structure.
The Looper takes a Message out of the MessageQueue and hands it to the Handler's handleMessage() for processing. When processing is complete, Message.recycle() is called to return the message to the Message Pool.

    Message

On Android, a Handler can carry data: a Bundle object can wrap String, Integer, and binary Blob values. In a worker thread we call the Handler object's sendEmptyMessage or sendMessage methods to pass a Bundle to the Handler. The Handler subclass overrides handleMessage(Message msg) and uses msg.what to distinguish each message; it unpacks the Bundle and refreshes the controls on the UI thread. The Handler's send... methods for sending messages are listed below, and there are also post... counterparts, much as in Win32, where PostMessage returns immediately after posting while SendMessage returns only after the message has been handled.

Message: the message object, that is, what is stored in the Message Queue; one Message Queue can hold many Messages. A Message instance is usually obtained with the static Message.obtain() method, which has several overloads, or with the Handler object's obtainMessage(). Rather than always constructing a new instance, these first check the Message Pool for a reusable Message and return one if available; only if the pool is empty is a new Message created from the given arguments. When removeMessages() is called, Messages are removed from the Message Queue and placed back into the Message Pool.

final boolean sendEmptyMessage(int what)
final boolean sendEmptyMessageAtTime(int what, long uptimeMillis)
final boolean sendEmptyMessageDelayed(int what, long delayMillis)
final boolean sendMessage(Message msg)
final boolean sendMessageAtFrontOfQueue(Message msg)
boolean sendMessageAtTime(Message msg, long uptimeMillis)
final boolean sendMessageDelayed(Message msg, long delayMillis)

    MessageQueue

MessageQueue is a data structure: as the name suggests, it is a message queue, the place where messages are stored. Each thread can own at most one MessageQueue.
Creating a thread does not automatically create its MessageQueue. A Looper object is normally used to manage a thread's MessageQueue. When the main thread is created, a default Looper object is created for it, and creating the Looper automatically creates its Message Queue. Other, non-main threads do not get a Looper automatically; when one is needed, call the prepare() function to create it.
Notes on java.util.concurrent

Programmers who have worked in Java will be familiar with the Concurrent classes, an important feature added in JDK 1.5. On handheld devices we do not recommend using them directly; Android already provides a well-designed task mechanism, so we will not dwell on them here.

Task and AsyncTask

Android also offers a way of handling background work that differs from raw threads: Task and AsyncTask. The open source code shows that these are wrappers around the Concurrent utilities that let developers handle asynchronous tasks conveniently. There are, of course, many more synchronization methods and techniques; for reasons of time and space they are not covered further here.
