Multimedia Archives | CodeGuru
https://www.codeguru.com/multimedia/

Drawing 3D OpenGL Graphics on Google Maps
https://www.codeguru.com/multimedia/drawing-3d-opengl-graphics-on-google-maps/
Wed, 18 May 2016 07:15:00 +0000

The post Drawing 3D OpenGL Graphics on Google Maps appeared first on CodeGuru.

1. Introduction

OpenGL 3D drawing on Google Maps means fetching a Google map into a standalone window application (rather than a browser) and extending that window so that 3D drawings can be rendered on top of the map. This concept opens the door to powerful and versatile OpenGL drawings on ubiquitous Google maps.

2. Scenario

In a flight simulator or radar flight-detection scenario, a moving aircraft must be drawn on a map. The aircraft has to be independent of the map type, of which there are several: road map, satellite map, terrain map, hybrid map, and so forth. The exact position of the aircraft with respect to the map (latitude and longitude) and its height above the ground must be drawn. Sometimes, aircraft pitching or rolling movements also need to be drawn.

3. Issues

Google Maps and OpenGL both need a windowing system. Qt (5.2), an operating-system-independent windowing framework, has been used.

4. Solution

4.1 Design

Flight- and Google map-related information is provided through a conf file. An entity called Controller reads the file and uses another entity, Drawer, to draw on a window; Drawer is responsible for drawing the Google map and the OpenGL primitives. The conf file contains multiple lines, each formatted as <LATITUDE>, <LONGITUDE>, <ALTITUDE>. The aircraft follows each line and shifts its position accordingly. Rolling, pitching, and heading are calculated from the changes in these three parameters.

GMap1
Figure 1: The three parameters of positioning
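A minimal sketch of how heading and pitch could be derived from two successive conf-file entries. The struct and function names here are mine (the article does not show this code), and the flat-earth approximation is only illustrative; real geodesic math would be needed over long distances:

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

struct ConfEntry { double lat, lon, alt; };  // one <LATITUDE>, <LONGITUDE>, <ALTITUDE> line

// Heading in degrees clockwise from north, between two successive entries.
// Equirectangular approximation: scale the longitude delta by cos(latitude).
double headingDeg(const ConfEntry& a, const ConfEntry& b) {
    double dLat = b.lat - a.lat;
    double dLon = (b.lon - a.lon) * std::cos(a.lat * kPi / 180.0);
    return std::atan2(dLon, dLat) * 180.0 / kPi;
}

// Pitch in degrees from the altitude change over the ground distance (meters).
double pitchDeg(const ConfEntry& a, const ConfEntry& b, double groundDistM) {
    return std::atan2(b.alt - a.alt, groundDistM) * 180.0 / kPi;
}
```

Feeding each pair of consecutive conf lines through functions like these yields the orientation the aircraft item should be drawn with.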

Further, Drawer can be divided into three different parts:

  • Window
  • Web Drawer
  • OpenGL Drawer

In order for the Controller to use the three modules in Drawer, a mediator is introduced. It mediates between the Controller and the three components of Drawer.

GMap2
Figure 2: Adding a mediator

Here, the relation between the drawing engine and the view is best described by Qt's Graphics View architecture, which follows the Observer pattern (MVC): QGraphicsScene with its QGraphicsItems acts as the data (the subject), and QGraphicsView acts as the view (the observer).

GMap3
Figure 3: The drawing engine and view relation
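In spirit, the scene-to-view relation above is the classic Observer pattern: the scene (subject) notifies every attached view (observer) when an item changes. A minimal, framework-free sketch; the class names are mine, not Qt's:

```cpp
#include <cassert>
#include <vector>

// Observer: anything that wants to be repainted when the scene changes.
struct View {
    int repaints = 0;
    void update() { ++repaints; }   // a real view would redraw here
};

// Subject: owns the items and notifies all attached views on change.
class Scene {
    std::vector<View*> views;
public:
    void attach(View* v) { views.push_back(v); }
    void itemChanged() {            // e.g. the web view finished loading a tile
        for (View* v : views) v->update();
    }
};
```

Qt's QGraphicsScene/QGraphicsView pair implements this wiring for you, which is why the article only needs to register items and viewports rather than write the notification plumbing.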

Here, the mediator (QMainWindow) plays an important role in making the Controller interact with the drawing engine and vice versa.

GMap4
Figure 4: The mediator makes the Controller interact with the drawing engine

4.2 Class Diagram

  • Controller: Airplane, a type of flight device, extends Device. Device contains the mediator pointer.
  • View: QGraphicsView (uses QGLWidget as its viewport)
  • Data: QGraphicsScene with ngraphicswebview (which extends QGraphicsWebView) as its item. ngraphicswebview contains the mediator pointer.
  • Mediator: nmainwindow extends QMainWindow and contains the device, ngraphicswebview, QGraphicsView, and QGraphicsScene.

GMap5
Figure 5: Class diagram

5. How It Works

In the Graphics View design, QGraphicsScene is the entity that manages the graphics items (QGraphicsItem) to be drawn on the graphics view (widget). QGLWidget, being a subclass of QWidget, is set as the viewport widget of the graphics view. QGraphicsWebView, a QGraphicsItem, is subclassed.

QGraphicsScene calls each registered graphics item’s paint method, passing the QPainter, the style options, and the target widget as arguments.

The subclassed QGraphicsWebView, being a graphics item (QGraphicsItem), also gets its paint method called. There it first calls the parent (QGraphicsWebView) paint method for the 2D Web view painting, then makes the QGLWidget context current and calls the OpenGL functions.

Thus, 3D is drawn on top of a 2D Web view painting.

6. Program

<<ngraphicswebview.cpp>>
void ngraphicswebview::paint(QPainter *p, const
QStyleOptionGraphicsItem *options, QWidget *widget){
   QGraphicsWebView::paint(p,options,widget);
   p->beginNativePainting();
   mediator->update(this);
   p->endNativePainting();
}

<<nmainwindow.cpp>>
nmainwindow::nmainwindow(QWidget *p):QMainWindow(p){
   gs=new QGraphicsScene(0,0,1024,768);
   gview = new QGraphicsView(gs,this);
   glwidget=new QGLWidget(QGLFormat(QGL::SampleBuffers));
   glwidget->makeCurrent();
   gwv=new ngraphicswebview(this);
   gview->setScene(gs);
   gs->addItem(gwv);
   gview->setViewport(glwidget);
   setCentralWidget(gview);
   .
   .
}

7. Experimental Data

Experimental data is collected on the Google map, where the 3D item is an aircraft. It was first tried with static images of the Google map and the aircraft, followed later by animation.

GMap6
Figure 6: Aircraft, OpenGL 3D drawing

7.1 Static Drawing

Now, a drawing of a 3D OpenGL item on a Google Map (ROADMAP).

GMap7
Figure 7: A Google Map (ROADMAP) with a 3D OpenGL drawing

7.2 Animation

Animation can be tried on a Google map where the 3D image changes its orientation while moving across the map, creating the impression that the 3D entity is actually moving. Flight simulator programs running on top of Google Maps can use this technique.

GMap8
Figure 8: Aircraft climbing, Google Map (SATELLITE)

GMap9
Figure 9: Aircraft descending, Google Map (ROADMAP)

8. Summary

This article is about drawing 3D items on a Google map. It began by discussing a design that keeps the implementation as simple as possible, and it also covered the MVC and mediator design patterns along with a little of Qt's Graphics View framework. The article includes two kinds of experimental data: one for static graphics on a Google map, the other for animation. Complete source code with conf.txt has been provided.

9. References

  • Shreiner, D., Neider, J., Woo, M., and Davis, T. OpenGL Programming Guide, Version 1.4.
  • Qt 5.2 documentation.

Mandelbrot Using C++ AMP
https://www.codeguru.com/multimedia/mandelbrot-using-c-amp/
Fri, 27 Jan 2012 17:57:52 +0000

The post Mandelbrot Using C++ AMP appeared first on CodeGuru.

It is time to start taking advantage of the computing power of GPUs…

A while ago I wrote an article about how to use the Microsoft Parallel Patterns Library (PPL) to render the Mandelbrot fractal using multiple CPU cores. That article can be found here.

This new article will make the Mandelbrot renderer multiple times faster by using a new Microsoft technology called C++ AMP (Accelerated Massive Parallelism), introduced in the Visual Studio 11 Developer Preview.

The code in the previous article showed each line of the fractal immediately after it was calculated. For this article, this is changed. The Mandelbrot image will be rendered completely off-screen, and only shown when the entire image has been calculated. This is to reduce the overhead of displaying the fractal line-by-line, especially with the C++ AMP version, which will be so fast that this overhead could become pretty substantial.

I will also switch to single-precision floating point numbers, because the GPU and GPU driver combination on my Windows 7 machine does not support double precision. C++ AMP itself supports both single- and double-precision floating point arithmetic; however, whether double precision actually works depends on your specific GPU hardware and your GPU vendor's drivers. A side effect of using single precision in the Mandelbrot renderer is that the image gets blocky at deep zoom levels. At the end of this article, a piece of code shows how to check whether an accelerator in your system supports double precision arithmetic. You could use that to decide at runtime between a single-precision and a double-precision implementation; this is left as an exercise for the reader.
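The blockiness has a simple cause: a float carries about 24 bits of mantissa, so once the zoom level is tiny relative to the view coordinate, neighboring pixel coordinates round to the same float. A standalone check (this is illustrative code of my own, not C++ AMP code):

```cpp
#include <cassert>

// True when two adjacent pixel coordinates become indistinguishable
// at the given zoom level, i.e. the rendered image turns blocky.
template <typename Real>
bool pixelsCollapse(Real viewR, Real zoomLevel) {
    Real a = viewR + Real(100) * zoomLevel;   // coordinate of pixel x = 100
    Real b = viewR + Real(101) * zoomLevel;   // coordinate of pixel x = 101
    return a == b;
}
```

At a zoom level of 1e-10 around coordinate 1.0, single precision collapses neighboring pixels while double precision still distinguishes them, which is exactly the blocky-image symptom described above.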

Single-Threaded Implementation

All implementations (the single-threaded, the PPL, and the C++ AMP Mandelbrot versions) use the following setup:

const int halfHeight = int(floor(m_nBuffHeight/2.0));
const int halfWidth = int(floor(m_nBuffWidth/2.0));
const int maxiter = 1024;
const float escapeValue = 4.0f;
float zoomLevel = float(m_zoomLevel);
float view_i = float(m_view_i);
float view_r = float(m_view_r);
float invLogOf2 = 1 / log(2.0f);
if (m_buffers[m_nRenderToBufferIndex].empty())
    return;
unsigned __int32* pBuffer = &(m_buffers[m_nRenderToBufferIndex][0]);

Here is the single-threaded implementation from my previous article, but updated to use single precision floating point arithmetic, and to render one Mandelbrot image to the buffer before displaying it:

for (int y = -halfHeight; y < halfHeight; ++y)
{
    // Formula: zi = z^2 + z0
    float Z0_i = view_i + y * zoomLevel;
    for (int x = -halfWidth; x < halfWidth; ++x)
    {
        float Z0_r = view_r + x * zoomLevel;
        float Z_r = Z0_r;
        float Z_i = Z0_i;
        float res = 0.0f;
        for (int iter = 0; iter < maxiter; ++iter)
        {
            float Z_rSquared = Z_r * Z_r;
            float Z_iSquared = Z_i * Z_i;
            if (Z_rSquared + Z_iSquared > escapeValue)
            {
                // We escaped
                res = iter + 1 - log(log(sqrt(Z_rSquared + Z_iSquared)))
                    * invLogOf2;
                break;
            }
            Z_i = 2 * Z_r * Z_i + Z0_i;
            Z_r = Z_rSquared - Z_iSquared + Z0_r;
        }

        unsigned __int32 result = RGB(res * 50, res * 50, res * 50);
        pBuffer[(y + halfHeight) * m_nBuffWidth + (x + halfWidth)] =
            result;
    }
}
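For reference, the escape computation above can be isolated into a self-contained function (the same math, single precision), which makes it easy to sanity-check: points inside the set never escape, and points far outside escape immediately:

```cpp
#include <cassert>
#include <cmath>

// Smoothed escape count for c = cr + ci*i. Returns 0 for points that
// never escape within maxiter (treated as inside the set), matching
// the renderer's res initialization above.
float smoothEscape(float cr, float ci, int maxiter = 1024,
                   float escapeValue = 4.0f) {
    const float invLogOf2 = 1.0f / std::log(2.0f);
    float zr = cr, zi = ci;
    for (int iter = 0; iter < maxiter; ++iter) {
        float zr2 = zr * zr, zi2 = zi * zi;
        if (zr2 + zi2 > escapeValue)
            return iter + 1 - std::log(std::log(std::sqrt(zr2 + zi2))) * invLogOf2;
        zi = 2 * zr * zi + ci;
        zr = zr2 - zi2 + cr;
    }
    return 0.0f;   // never escaped: inside the set
}
```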

Multi-Threaded Implementation (PPL)

Parallelizing this implementation using the Microsoft Parallel Patterns Library (PPL) is shown in my previous article. Again, this implementation has been updated to use single precision arithmetic and to render one whole frame to the buffer before displaying it. The updated code is as follows:

parallel_for(-halfHeight, halfHeight, 1, [&](int y)
{
    // Formula: zi = z^2 + z0
    float Z0_i = view_i + y * zoomLevel;
    for (int x = -halfWidth; x < halfWidth; ++x)
    {
        float Z0_r = view_r + x * zoomLevel;
        float Z_r = Z0_r;
        float Z_i = Z0_i;
        float res = 0.0f;
        for (int iter = 0; iter < maxiter; ++iter)
        {
            float Z_rSquared = Z_r * Z_r;
            float Z_iSquared = Z_i * Z_i;
            if (Z_rSquared + Z_iSquared > escapeValue)
            {
                // We escaped
                res = iter + 1 - log(log(sqrt(Z_rSquared + Z_iSquared)))
                    * invLogOf2;
                break;
            }
            Z_i = 2 * Z_r * Z_i + Z0_i;
            Z_r = Z_rSquared - Z_iSquared + Z0_r;
        }

        unsigned __int32 result = RGB(res * 50, res * 50, res * 50);
        pBuffer[(y + halfHeight) * m_nBuffWidth + (x + halfWidth)] =
            result;
    }
});
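parallel_for is specific to Microsoft's PPL; the same row-wise decomposition works with plain std::thread, because every scanline is computed independently. A portable sketch (this helper is my own, not the article's code):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <thread>
#include <vector>

// Invoke rowFunc(y) for every y in [0, height), splitting the rows
// into contiguous bands, one band per worker thread.
void parallelRows(int height, int nThreads,
                  const std::function<void(int)>& rowFunc) {
    std::vector<std::thread> pool;
    int band = (height + nThreads - 1) / nThreads;   // ceil(height / nThreads)
    for (int t = 0; t < nThreads; ++t) {
        int y0 = t * band;
        int y1 = std::min(height, y0 + band);
        pool.emplace_back([y0, y1, &rowFunc] {
            for (int y = y0; y < y1; ++y) rowFunc(y);
        });
    }
    for (auto& th : pool) th.join();
}
```

Each thread writes only its own rows of the pixel buffer, so no locking is needed, which is the same property PPL's parallel_for exploits above.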

Simple C++ MP3 Player Class
https://www.codeguru.com/multimedia/simple-c-mp3-player-class/
Mon, 22 Aug 2011 15:53:33 +0000

The post Simple C++ MP3 Player Class appeared first on CodeGuru.

If you just need to play MP3s in your application (for example, a short MP3 during the application splash screen), the Mp3 class is a no-frills C++ MP3/WMA DirectShow player class for such simple needs. The original code is from Flipcode's contributor, Alan Kemp. It needed a bit of tweaking to include the necessary header files and import libraries so that it compiles in Visual Studio 2010. Since this class relies on DirectShow, you need the Windows SDK to build it. Visual Studio 2010 actually comes with a subset of the Windows SDK that includes the DirectShow libraries, so you can build this class without downloading anything. You have to call CoInitialize to initialize the COM runtime before calling Load on an MP3 file, and you also have to call CoUninitialize at the end of your application, after Cleanup is called. The header file, Mp3.h, is listed below.

cppmp3player
Figure 1: cppmp3player

#define WIN32_LEAN_AND_MEAN             // Exclude rarely-used stuff from Windows headers
// Windows Header Files:
#include <windows.h>
#include <mmsystem.h>
#include <strmif.h>
#include <control.h>

#pragma comment(lib, "strmiids.lib")

class Mp3
{
public:
    Mp3();
    ~Mp3();

    bool Load(LPCWSTR filename);
    void Cleanup();

    bool Play();
    bool Pause();
    bool Stop();

    // Poll this function with msTimeout = 0 so that it returns immediately.
    // If the MP3 has finished playing, WaitForCompletion will return true.
    bool WaitForCompletion(long msTimeout, long* EvCode);

    // -10000 is the lowest volume and 0 is the highest; any positive value (> 0) will fail
    bool SetVolume(long vol);

    // -10000 is lowest volume and 0 is highest volume
    long GetVolume();

    // Returns the duration in 1/10 millionth of a second,
    // meaning 10,000,000 == 1 second
    // You have to divide the result by 10,000,000
    // to get the duration in seconds.
    __int64 GetDuration();

    // Returns the current playing position
    // in 1/10 millionth of a second,
    // meaning 10,000,000 == 1 second
    // You have to divide the result by 10,000,000
    // to get the duration in seconds.
    __int64 GetCurrentPosition();

    // Seek to position with pCurrent and pStop
    // bAbsolutePositioning specifies absolute or relative positioning.
    // If pCurrent and pStop have the same value, the player will seek to the position
    // and stop playing. Note: Even if pCurrent and pStop have the same value,
    // avoid putting the same pointer into both of them, meaning put different
    // pointers with the same dereferenced value.
    bool SetPositions(__int64* pCurrent, __int64* pStop, bool bAbsolutePositioning);

private:
    IGraphBuilder *  pigb;
    IMediaControl *  pimc;
    IMediaEventEx *  pimex;
    IBasicAudio * piba;
    IMediaSeeking * pims;
    bool    ready;
    // Duration of the MP3.
    __int64 duration;

};

The original class only has play, pause, and stop functionality. Note: after calling Pause, you have to call Play to resume playing. Since I need to loop my music, I need to know when the MP3 has ended, so I added the WaitForCompletion method to poll periodically whether playback has ended, in order to replay it. Since the original code always played at full volume, I also added a GetVolume method to get the volume and a SetVolume method to set it. Note: -10000 is the minimum volume and 0 is the maximum; if you set any positive volume greater than 0, you will receive an error. You can call GetDuration and GetCurrentPosition to get the duration of the MP3 and its current playing (time) position, respectively. These two methods return units of one ten-millionth of a second (1/10,000,000 of a second): you have to divide by 10,000,000 to get seconds. The reason I did not return the duration in seconds is that I found the second too coarse-grained a unit for seeking. The source code implementation, Mp3.cpp, is listed below.
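The 1/10,000,000-second unit is DirectShow's standard media time (100-nanosecond ticks). The conversions described above are one-liners; a tiny helper (the names are mine) keeps the magic constant in one place:

```cpp
#include <cassert>

// DirectShow media time: 10,000,000 units (100-ns ticks) per second.
const long long kUnitsPerSecond = 10000000LL;

double unitsToSeconds(long long units) {
    return static_cast<double>(units) / kUnitsPerSecond;
}

long long secondsToUnits(double seconds) {
    return static_cast<long long>(seconds * kUnitsPerSecond);
}
```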

#include "Mp3.h"
#include <uuids.h>

Mp3::Mp3()
{
    pigb = NULL;
    pimc = NULL;
    pimex = NULL;
    piba = NULL;
    pims = NULL;
    ready = false;
    duration = 0;
}

Mp3::~Mp3()
{
    Cleanup();
}

void Mp3::Cleanup()
{
    if (pimc)
        pimc->Stop();

    if(pigb)
    {
        pigb->Release();
        pigb = NULL;
    }

    if(pimc)
    {
        pimc->Release();
        pimc = NULL;
    }

    if(pimex)
    {
        pimex->Release();
        pimex = NULL;
    }

    if(piba)
    {
        piba->Release();
        piba = NULL;
    }

    if(pims)
    {
        pims->Release();
        pims = NULL;
    }
    ready = false;
}

bool Mp3::Load(LPCWSTR szFile)
{
    Cleanup();
    ready = false;
    if (SUCCEEDED(CoCreateInstance( CLSID_FilterGraph,
        NULL,
        CLSCTX_INPROC_SERVER,
        IID_IGraphBuilder,
        (void **)&this->pigb)))
    {
        pigb->QueryInterface(IID_IMediaControl, (void **)&pimc);
        pigb->QueryInterface(IID_IMediaEventEx, (void **)&pimex);
        pigb->QueryInterface(IID_IBasicAudio, (void**)&piba);
        pigb->QueryInterface(IID_IMediaSeeking, (void**)&pims);

        HRESULT hr = pigb->RenderFile(szFile, NULL);
        if (SUCCEEDED(hr))
        {
            ready = true;
            if(pims)
            {
                pims->SetTimeFormat(&TIME_FORMAT_MEDIA_TIME);
                pims->GetDuration(&duration); // in units of 10,000,000 per second
            }
        }
    }
    return ready;
}

bool Mp3::Play()
{
    if (ready&&pimc)
    {
        HRESULT hr = pimc->Run();
        return SUCCEEDED(hr);
    }
    return false;
}

bool Mp3::Pause()
{
    if (ready&&pimc)
    {
        HRESULT hr = pimc->Pause();
        return SUCCEEDED(hr);
    }
    return false;
}

bool Mp3::Stop()
{
    if (ready&&pimc)
    {
        HRESULT hr = pimc->Stop();
        return SUCCEEDED(hr);
    }
    return false;
}

bool Mp3::WaitForCompletion(long msTimeout, long* EvCode)
{
    if (ready&&pimex)
    {
        HRESULT hr = pimex->WaitForCompletion(msTimeout, EvCode);
        return *EvCode > 0;
    }

    return false;
}

bool Mp3::SetVolume(long vol)
{
    if (ready&&piba)
    {
        HRESULT hr = piba->put_Volume(vol);
        return SUCCEEDED(hr);
    }
    return false;
}

long Mp3::GetVolume()
{
    if (ready&&piba)
    {
        long vol = -1;
        HRESULT hr = piba->get_Volume(&vol);

        if(SUCCEEDED(hr))
            return vol;
    }

    return -1;
}

__int64 Mp3::GetDuration()
{
    return duration;
}

__int64 Mp3::GetCurrentPosition()
{
    if (ready&&pims)
    {
        __int64 curpos = -1;
        HRESULT hr = pims->GetCurrentPosition(&curpos);

        if(SUCCEEDED(hr))
            return curpos;
    }

    return -1;
}

bool Mp3::SetPositions(__int64* pCurrent, __int64* pStop, bool bAbsolutePositioning)
{
    if (ready&&pims)
    {
        DWORD flags = 0;
        if(bAbsolutePositioning)
            flags = AM_SEEKING_AbsolutePositioning | AM_SEEKING_SeekToKeyFrame;
        else
            flags = AM_SEEKING_RelativePositioning | AM_SEEKING_SeekToKeyFrame;

        HRESULT hr = pims->SetPositions(pCurrent, flags, pStop, flags);

        if(SUCCEEDED(hr))
            return true;
    }

    return false;
}

The source code includes a static library project, a DLL project, and a demo project, PlayMp3, which plays MP3s with a helper class, CLibMP3DLL, that loads LibMP3DLL.dll at runtime. Usage of CLibMP3DLL is similar to the Mp3 class, with additional LoadDLL and UnloadDLL methods to load/unload the DLL. Below is the header file of CLibMP3DLL.

class CLibMP3DLL
{
public:
    CLibMP3DLL(void);
    ~CLibMP3DLL(void);

    bool LoadDLL(LPCWSTR dll);
    void UnloadDLL();

    bool Load(LPCWSTR filename);
    bool Cleanup();

    bool Play();
    bool Pause();
    bool Stop();
    bool WaitForCompletion(long msTimeout, long* EvCode);

    bool SetVolume(long vol);
    long GetVolume();

    __int64 GetDuration();
    __int64 GetCurrentPosition();

    bool SetPositions(__int64* pCurrent, __int64* pStop, bool bAbsolutePositioning);


private:
    HMODULE m_Mod;
};

Though I added only a few methods to the Mp3 class, it took quite a bit of effort to get them to run correctly. I hope to pass these time savings on to other developers who simply want to play an MP3 file, minus the hassle. This class is hosted at CodePlex.

Library for Raw Video Processing
https://www.codeguru.com/multimedia/library-for-raw-video-processing/
Tue, 14 Jun 2011 18:32:41 +0000

The post Library for Raw Video Processing appeared first on CodeGuru.


This project is built to be used as a library for performing image processing on AVI files exclusively through the Microsoft AVIFile API. The main tasks this library is built for are defined below:

  • Open an AVI file on disk
  • Read the AVI video stream, frame by frame, as raw RGB data
  • Perform some processing on the raw image data
  • Save the resulting video, frame by frame, into another AVI file

The main limitation of this library is that the generated AVI file can contain only uncompressed video, which is expensive in terms of disk space. To make the library more useful, a stand-alone function is added to compress the video from one file into another.
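The disk-space cost is easy to put a number on: uncompressed 24-bit RGB frames cost width x height x 3 bytes each, so one minute of 640x480 video at 25 fps comes to roughly 1.4 GB. The arithmetic, as a sketch:

```cpp
#include <cassert>
#include <cstdint>

// Approximate size of uncompressed 24-bit RGB video, ignoring the AVI
// container overhead and any per-row padding.
std::uint64_t rawVideoBytes(int width, int height, int fps, int seconds) {
    return std::uint64_t(width) * height * 3 * fps * seconds;
}
```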

The library consists of a single implementation file, "RawAVIStream.cpp", and a single header file, "RawAVIStream.h". In addition to the core files, there is sample code that uses this library in "test.cpp".

The code is built with "Microsoft Visual Studio 2008, 9.0.30729.1 SP" and "Microsoft Platform SDK for Windows XP SP2". All the relevant files are included, except codecs, MSVCRT redistributable, etc.

The following is an explanation of how the API is used.

Reading from an AVI File

Here is the minimal code needed to open an AVI file and initiate reading raw data:

	// Error handling is intentionally left out for clarity
	PAVISTREAM ppavi;
	AVIFileInit();
	AVIStreamOpenFromFile(&ppavi, filename, streamtypeVIDEO, 0, OF_READ, NULL);

	BITMAPINFOHEADER bi;
	long format_length = sizeof(bi);
	AVIStreamReadFormat(ppavi, 0, &bi, &format_length);
	bi.biBitCount = 24;
	bi.biCompression = BI_RGB;
	bi.biSizeImage = 0;
	PGETFRAME getframe = AVIStreamGetFrameOpen(ppavi, &bi);

	AVIStreamRelease(ppavi);

The main function in this code block is AVIStreamGetFrameOpen. If it succeeds, we have a pointer that allows us to read video data, frame by frame, like this:

	void *buffer = AVIStreamGetFrame(getframe, index);
	CopyMemory(target, (BYTE*)buffer + ((BITMAPINFOHEADER*)buffer)->biSize,
		((BITMAPINFOHEADER*)buffer)->biSizeImage);

The auxiliary function used is AVIStreamReadFormat. Obviously, we could fill in the BITMAPINFOHEADER without this function, but we have to determine the frame size somehow. In this code we use AVIStreamOpenFromFile because we are only interested in video processing; the alternative to this call is AVIFileOpen/AVIFileGetStream.
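One caveat when interpreting the raw buffer: Windows DIB rows, as returned by AVIStreamGetFrame, are padded to a DWORD (4-byte) boundary, so the byte count per row is not always width * 3. The standard stride formula (worth checking against biSizeImage):

```cpp
#include <cassert>

// Bytes per image row after DWORD (4-byte) alignment, as used by
// Windows DIBs.
int rowStride(int width, int bitsPerPixel) {
    return ((width * bitsPerPixel + 31) / 32) * 4;
}
```

For widths that are multiples of 4 at 24 bpp the padding vanishes, which is why simple width * 3 indexing often appears to work.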

Writing into an AVI File

Creating an uncompressed AVI file from a set of raw data frames is performed using the same API:

	// Error handling is intentionally left out for clarity
	BITMAPINFOHEADER bi;
	PAVIFILE pavifile;
	AVISTREAMINFO info;
	PAVISTREAM ppavi;
	AVIFileInit();

	// Create empty AVI file
	AVIFileOpen(&pavifile, filename, OF_WRITE | OF_CREATE, NULL);

	// Create a video stream with minimum data needed.
	ZeroMemory(&info, sizeof(info));
	info.fccType = streamtypeVIDEO;
	info.dwScale = 1;
	info.dwRate = 25;	// This is rather arbitrary, although inspired by PAL TV standard
	AVIFileCreateStream(pavifile, &ppavi, &info);
	AVIFileRelease(pavifile);

	// Set the data format to raw RGB, compatible with the format specified
	// in RawAVIReader above. Compression cannot be specified at this stage;
	// if compressed output is wanted, it must be produced in a separate pass.
	// We specify only the minimal data needed to render a valid AVI video.
	ZeroMemory(&bi, sizeof(bi));
	bi.biSize = sizeof(bi);
	bi.biWidth = width;
	bi.biHeight = height;
	bi.biPlanes = 1;
	bi.biBitCount = 24;
	bi.biCompression = BI_RGB;
	bi.biXPelsPerMeter = 1000;
	bi.biYPelsPerMeter = 1000;
	AVIStreamSetFormat(ppavi, 0, &bi, sizeof(bi));

After this code runs, the variable 'ppavi' points to a valid AVI stream handle. Having this handle, we do:

	BYTE *frameRGB = new BYTE[3 * width * height];
	...
	fillInTheFrame(frameRGB, width, height); // Whatever that may mean
	AVIStreamWrite(ppavi, index, 1, frameRGB,
		3 * width * height,
		0, NULL, NULL);

Video Compression

In addition to the core task of reading and writing raw AVI data, this library implements a stand-alone function that compresses one file into another using a codec that ships with Windows. The choice

options.fccHandler = mmioFOURCC('M','S','V','C');

is taken from MSDN; the 'MSVC' FourCC selects the Microsoft Video 1 codec, which can be expected on most Windows machines. The simplest use of this codec is:

	AVICOMPRESSOPTIONS options;
	PAVISTREAM ppavi;
	AVIStreamOpenFromFile(&ppavi, source, streamtypeVIDEO, 0, OF_READ, NULL);

	memset(&options, 0, sizeof(options));
	options.fccType = streamtypeVIDEO;
	options.fccHandler = mmioFOURCC('M','S','V','C');
	options.dwKeyFrameEvery = 2;
	options.dwQuality = 1;
	options.dwFlags = AVICOMPRESSF_KEYFRAMES;
	AVISave(target, NULL, NULL, 1, ppavi, &options);

The codec used with this API must be registered through the Video Compression Manager; see the MSDN documentation on the Video Compression Manager and the ICInstall function.

How to add DMO in DirectShow filter graph
https://www.codeguru.com/multimedia/how-to-add-dmo-in-directshow-filter-graph/
Sat, 11 Dec 2010 00:38:58 +0000

The post How to add DMO in DirectShow filter graph appeared first on CodeGuru.


Introduction

In this article, we'll see how easily we can add a DMO (Echo) to a DirectShow filter graph. As a result, we'll get audio with an echo effect. Install the Windows SDK for Windows 7 on Windows 7 and run GraphEdt.exe. Now, select the menu Graph -> Insert Filters. The filter window will pop up; see the example below:

In this window, there is a section named "DMO audio effects" listing DMOs we can use to add effects to our audio. In this case, I've picked the 'Echo' DMO. Now the question is: how do I know my system has the required DMO installed? This can be answered by enumerating the system for audio DMOs and then selecting the desired one if available. The following code section demonstrates how to enumerate the system for DMOs.


void EnumAudioDMO()
{
    IEnumDMO* pEnum = NULL;
    HRESULT hr = DMOEnum(
        DMOCATEGORY_AUDIO_EFFECT,  // Category
        DMO_ENUMF_INCLUDE_KEYED,   // Include keyed DMOs
        0, NULL,                   // Input types (don't care)
        0, NULL,                   // Output types (don't care)
        &pEnum);

    if (SUCCEEDED(hr))
    {
        CLSID clsidDMO;
        WCHAR* wszName;
        do
        {
            hr = pEnum->Next(1, &clsidDMO, &wszName, NULL);
            if (hr == S_OK)
            {
                // Now wszName holds the friendly name of the DMO,
                // and clsidDMO holds the CLSID.
                wprintf(L"DMO Name: %s\n", wszName);
                if (wcscmp(wszName, L"Echo") == 0)
                {
                    g_clsidDMO = clsidDMO;
                    g_bFound = TRUE;
                }

                // Remember to release wszName!
                CoTaskMemFree(wszName);
            }
        } while (hr == S_OK);
        pEnum->Release();
    }
}

The DMOEnum API scans the registry for installed DMOs. We then check for the Echo DMO; once it is located, its class ID can be used to insert the DMO into the DirectShow filter graph. A DMO can be used in a DirectShow-based application, but we need to provide a DMO wrapper.

If we know the class identifier (CLSID) of a specific DMO that we want to use, we can initialize the DMO Wrapper filter with that DMO. In the above code snippet, we already obtained the CLSID of the target DMO while enumerating the system. The following part shows how to create the DMO wrapper and insert the DMO into the DirectShow filter graph:

1. Call CoCreateInstance to create the DMO Wrapper filter.

2. Query the DMO Wrapper filter for the IDMOWrapperFilter interface.

3. Call the IDMOWrapperFilter::Init method, specifying the CLSID of the DMO and the GUID of the DMO's category.

4. Finally, add the filter to the DirectShow filter graph.

The following code shows how we can build a graph with the Echo DMO to add echo to the audio output:


void BuildFilterGraph()
{
    CoInitialize(NULL);

    IGraphBuilder *pGraphBuilder = NULL;
    IBaseFilter *pFilter = NULL;
    IMediaControl *pMediaControl = NULL;
    IMediaEventEx *pEvt = NULL;

    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
        IID_IGraphBuilder, (void **)&pGraphBuilder);

    pGraphBuilder->QueryInterface(IID_IMediaControl,
        reinterpret_cast<void**>(&pMediaControl));
    pGraphBuilder->QueryInterface(IID_IMediaEventEx,
        reinterpret_cast<void**>(&pEvt));

    if (g_bFound) // If found, add the Echo DMO to the filter graph
    {
        HRESULT hr = CoCreateInstance(CLSID_DMOWrapperFilter, NULL,
            CLSCTX_INPROC_SERVER, IID_IBaseFilter,
            reinterpret_cast<void**>(&pFilter));

        if (SUCCEEDED(hr))
        {
            // Query for IDMOWrapperFilter.
            IDMOWrapperFilter *pDmoWrapper;
            hr = pFilter->QueryInterface(IID_IDMOWrapperFilter,
                reinterpret_cast<void**>(&pDmoWrapper));

            if (SUCCEEDED(hr))
            {
                // Initialize the filter. The Echo DMO is an audio effect,
                // so pass the audio-effect category here.
                hr = pDmoWrapper->Init(g_clsidDMO, DMOCATEGORY_AUDIO_EFFECT);
                pDmoWrapper->Release();

                if (SUCCEEDED(hr))
                {
                    // Add the filter to the graph.
                    hr = pGraphBuilder->AddFilter(pFilter, L"Echo");
                }
            }
        }
    }

    pGraphBuilder->RenderFile(L"C:\\Test.wmv", NULL);

    pMediaControl->Run();

    long evCode;
    pEvt->WaitForCompletion(INFINITE, &evCode);

    pFilter->Release();
    pGraphBuilder->Release();
}



The following sample code shows how we can write a DirectShow application that adds echo to an existing video without modifying the content.


// DMOEnumeration.cpp : Defines the entry point for the console application.
//

#include "stdafx.h"
// The include names were lost in the original listing; these are the headers
// this code needs (assumed): DMOEnum comes from dmoreg.h, the DMO Wrapper
// filter from dmodshow.h, and the DirectShow interfaces from dshow.h.
#include <dshow.h>
#include <dmodshow.h>
#include <dmoreg.h>

CLSID g_clsidDMO;
BOOL g_bFound;

// Enumerate audio DMOs and pick the Echo DMO
void EnumAudioDMO()
{
    IEnumDMO* pEnum = NULL;
    HRESULT hr = DMOEnum(
        DMOCATEGORY_AUDIO_EFFECT,  // Category
        DMO_ENUMF_INCLUDE_KEYED,   // Include keyed DMOs
        0, NULL,                   // Input types (don't care)
        0, NULL,                   // Output types (don't care)
        &pEnum);

    if (SUCCEEDED(hr))
    {
        CLSID clsidDMO;
        WCHAR* wszName;
        do
        {
            hr = pEnum->Next(1, &clsidDMO, &wszName, NULL);
            if (hr == S_OK)
            {
                // Now wszName holds the friendly name of the DMO,
                // and clsidDMO holds the CLSID.
                wprintf(L"DMO Name: %s\n", wszName);
                if (wcscmp(wszName, L"Echo") == 0)
                {
                    g_clsidDMO = clsidDMO;
                    g_bFound = TRUE;
                }

                // Remember to release wszName!
                CoTaskMemFree(wszName);
            }
        } while (hr == S_OK);
        pEnum->Release();
    }
}

void BuildFilterGraph()
{
    CoInitialize(NULL);

    IGraphBuilder *pGraphBuilder = NULL;
    IBaseFilter *pFilter = NULL;
    IMediaControl *pMediaControl = NULL;
    IMediaEventEx *pEvt = NULL;

    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
        IID_IGraphBuilder, (void **)&pGraphBuilder);

    pGraphBuilder->QueryInterface(IID_IMediaControl, reinterpret_cast<void**>(&pMediaControl));
    pGraphBuilder->QueryInterface(IID_IMediaEventEx, reinterpret_cast<void**>(&pEvt));

    if(g_bFound) // If found, add the Echo DMO to the filter graph
    {
        HRESULT hr = CoCreateInstance(CLSID_DMOWrapperFilter, NULL,
            CLSCTX_INPROC_SERVER, IID_IBaseFilter, reinterpret_cast<void**>(&pFilter));

        if (SUCCEEDED(hr))
        {
            // Query for IDMOWrapperFilter.
            IDMOWrapperFilter *pDmoWrapper;
            hr = pFilter->QueryInterface(IID_IDMOWrapperFilter,
                reinterpret_cast<void**>(&pDmoWrapper));

            if (SUCCEEDED(hr))
            {
                // Initialize the filter with the Echo DMO's CLSID and its
                // category (the Echo DMO is an audio effect).
                hr = pDmoWrapper->Init(g_clsidDMO, DMOCATEGORY_AUDIO_EFFECT);
                pDmoWrapper->Release();

                if (SUCCEEDED(hr))
                {
                    // Add the filter to the graph.
                    hr = pGraphBuilder->AddFilter(pFilter, L"Echo");
                }
            }
        }
    }

    pGraphBuilder->RenderFile(L"C:\\Test.wmv", NULL);

    pMediaControl->Run();

    long evCode;
    pEvt->WaitForCompletion(INFINITE, &evCode);

    pFilter->Release();
    pGraphBuilder->Release();
}

int _tmain(int argc, _TCHAR* argv[])
{
    g_bFound = FALSE;

    EnumAudioDMO();
    BuildFilterGraph();

    return 0;
}

The post How to add DMO in DirectShow filter graph appeared first on CodeGuru.

]]>
Tip: Detect if a Language Font is Installed (Such as East Asian) https://www.codeguru.com/multimedia/tip-detect-if-a-language-font-is-installed-such-as-east-asian/ Tue, 13 Oct 2009 06:06:36 +0000 https://www.codeguru.com/uncategorized/tip-detect-if-a-language-font-is-installed-such-as-east-asian/ There are times when your application might need to display Japanese fonts or some other East Asian languages, like Chinese or Korean—which are collectively know as CJK in acronym. On a Japanese Windows XP, this is not a problem. On a non-Japanese Windows XP, there is a problem, because the Japanese characters will appear as […]

The post Tip: Detect if a Language Font is Installed (Such as East Asian) appeared first on CodeGuru.

]]>
There are times when your application might need to display Japanese fonts or those of other East Asian languages, such as Chinese or Korean (collectively known by the acronym CJK). On a Japanese Windows XP, this is not a problem. On a non-Japanese Windows XP, there is a problem, because the Japanese characters will appear as squared boxes, so (non-Japanese) users may wonder what the actual language and its content are. A way is needed to detect when the application is running on a non-Japanese Windows XP without the East Asian Language Pack installed, and to notify the users that the East Asian Language Pack needs to be installed in order to display Japanese text.

Upon seeing the prompt, the user may choose to install the East Asian Language Pack if they understand Japanese, or they can ignore it (or not run the application) if they don't understand Japanese anyway. Please note this is not a problem on Windows Vista and Windows 7, as these two operating systems have all of the common languages installed.

I have searched for a way to detect whether an East Asian language is installed on Windows XP, but I could find none. Instead, I found many useless replies that say this is a non-issue: only a Japanese user will run a Japanese application, and they will have all the necessary fonts installed. While this is true in most cases, it is not true in all cases. For example, if I have an English OpenGL game demo that happens to display Japanese text on one of the signboards in the scene because it depicts a busy street in a Japanese city, then I need a way to detect whether the Japanese language is installed so that I can generate the text glyphs for that scene. If the Japanese language is not installed, I can choose to display English text instead.

I found out that there is an API, the EnumSystemLanguageGroups function, that can do this. The function signature is shown below.

BOOL EnumSystemLanguageGroups(
  __in  LANGUAGEGROUP_ENUMPROC lpLanguageGroupEnumProc,
  __in  DWORD dwFlags,
  __in  LONG_PTR lParam
);

The first parameter, lpLanguageGroupEnumProc, is a callback function of the EnumLanguageGroupsProc type, which is called during system language enumeration. The second parameter, dwFlags, can be either LGRPID_INSTALLED or LGRPID_SUPPORTED, depending on which level of support you want to detect. The third parameter, lParam, is the address of a variable that will be passed into the callback function; you can store the enumeration result in this variable inside the callback. Below is a code sample on how to detect whether Japanese is installed.

#include <string>
#include <iostream>
#include <Windows.h>
#include <Winnls.h>

BOOL CALLBACK JapaneseEnumLanguageGroupsProc(
    LGRPID LanguageGroup,             // language-group identifier
    LPTSTR lpLanguageGroupString,     // language-group identifier string
    LPTSTR lpLanguageGroupNameString, // language-group name string
    DWORD dwFlags,                    // options
    LONG_PTR  lParam                  // callback parameter
    )
{
    LONG* plLang = (LONG*)(lParam);
    std::wstring strLang=lpLanguageGroupNameString;
    if(L"Japanese"==strLang&&dwFlags==LGRPID_INSTALLED)
    {
        *plLang = 1;
        return FALSE; // Do not enumerate anymore
    }

    return TRUE;
}

bool IsJapaneseLangInstalled()
{
    LONG lLang=0;
    EnumSystemLanguageGroups(
        JapaneseEnumLanguageGroupsProc, // callback function
        LGRPID_SUPPORTED,               // language groups
        (LONG_PTR)&lLang          // callback parameter
        );

    if(lLang==1)
        return true;

    return false;
}

int _tmain(int argc, _TCHAR* argv[])
{
	std::wstring strAns = IsJapaneseLangInstalled() ? L"Yes" : L"No" ;
	std::wcout<<L"Japanese Language Installed : " <<  strAns  << std::endl;

	return 0;
}

Note: For Chinese language, you have to choose whether you want to detect if Traditional Chinese, Simplified Chinese, or both are installed. The sample code to detect CJK is included as a source code download of this article.

The post Tip: Detect if a Language Font is Installed (Such as East Asian) appeared first on CodeGuru.

]]>
Windows Imaging Component https://www.codeguru.com/multimedia/windows-imaging-component/ Thu, 07 May 2009 17:58:18 +0000 https://www.codeguru.com/uncategorized/windows-imaging-component/ Introduction Windows Imaging Component (WIC in short) is the new platform to load, save and convert images between various image formats, including the latest HD Photo format designed and aggressively pushed by Microsoft, to be the JPEG2000 replacement. Unlike JPEG2000 which is plagued by various patents issues, HD Photo standard is a open standard which […]

The post Windows Imaging Component appeared first on CodeGuru.

]]>
Introduction

Windows Imaging Component (WIC for short) is the new platform to load, save, and convert images between various image formats, including the HD Photo format designed and aggressively pushed by Microsoft as the JPEG2000 replacement. Unlike JPEG2000, which is plagued by various patent issues, HD Photo is an open standard that is free for all to use, and it offers a compression ratio and picture quality better than both JPEG and JPEG2000. Windows Imaging Component is also a platform for programmers to write their own image codecs for their own image formats or for RAW images from digital cameras. The standard codecs provided in Windows Imaging Component are more secure than those provided by GDI+. WIC only provides ways to load, convert, and save images; to display an image loaded by WIC, you use either Device Independent Bitmaps (DIBs) or GDI+. The sample code provided by Microsoft uses DIBs, which are difficult to use. For this article, we will use GDI+. The advantage of using GDI+ is that you can do drawing or further image processing on the GDI+ image.

Some Pictures

Since no web browser supports displaying HD Photos yet, I have put up some JPEG pictures converted from the HD Photo format. Note: there is unavoidable image-quality degradation in the conversion because compression is involved twice.

Photo Notes: This is the front view of the new office building which my company moved into barely two weeks ago.

Photo Notes: I have the panoramic view from my desk, which my colleagues envy. This is the left view from my window. The building with the red roof is the train station, from which I walk 15 minutes to the office every day.

Photo Notes: This is the right view from my window, where you can see that my office is right beside a mega-mart with many good restaurants and shops.

Building the Sample Code

Windows Vista and Windows XP SP3 come with WIC. To run my sample projects on a Windows XP (pre-SP3) PC, either install Microsoft .NET Framework 3.0 or install the Windows Imaging Component (32 bit) or Windows Imaging Component (64 bit). For sample code on how to program WIC using .NET or the WIC COM interfaces, you can download the Windows Imaging Component sample source and tools.

To build the sample code presented in this article, you need to download and install the Windows SDK update for Vista. Because the wincodec.idl and wincodecsdk.idl in the sample projects refer to files on my development PC, you need to remove them from the projects and add them back with their locations in the Windows SDK on your development PC.

Using the Code

To load the WIC images into GDI+, we use the LoadHdPhotos class and its static member function, GetImage().

class LoadHdPhotos
{
public:

static bool GetImage(
    CComPtr<IWICImagingFactory> imagingFactory,
    const std::wstring& szFile,
    Gdiplus::Bitmap*& pbmp,
    Gdiplus::Bitmap*& pImageThumb );
};

To use the GetImage() function, you need to create the IWICImagingFactory object first. I chose not to encapsulate this factory object in LoadHdPhotos because it is meant to be created once and used multiple times when loading and/or saving images.

Let us see some example code of using this function.

m_ImagingFactory.CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER );

if(m_ImagingFactory)
{
    using namespace Gdiplus;
    Bitmap* pImageThumb = NULL;
    bool bRet = LoadHdPhotos::GetImage(
        m_ImagingFactory,
        L"E:\\Media\\MyImage.hdp",
        m_pbmp,
        pImageThumb );

    if( bRet && m_pbmp!= NULL )
    {
        // after getting the GDI+ image, display it or do further processing.
        CClientDC dc(this);
        Graphics graphics(dc.GetSafeHdc());
        graphics.DrawImage(m_pbmp,0,0,m_pbmp->GetWidth(),m_pbmp->GetHeight());

    }
}

After your further processing, you may wish to save the resultant image. To do this you will use the SaveHdPhotos class. The SaveHdPhotos class simply wraps the CImageTransencoder class from the WICExplorer sample code from the WIC sample tools.

SaveHdPhotos save;
save.SetLossless(false);
save.SetCompressionQuality(0.8f);
save.SetImageQuality(0.8f);

save.SetDpi( (double)m_pbmp->GetHorizontalResolution(),
    (double)m_pbmp->GetVerticalResolution() );
save.SetPixelFormat( GUID_WICPixelFormat24bppBGR );

save.Begin(
    L"E:\\Media\\MyImage2.hdp",
    m_ImagingFactory);

if( m_pbmp && pImageThumb )
    save.AddFrame( m_pbmp, pImageThumb );
else if( m_pbmp )
    save.AddFrame( m_pbmp, NULL );

save.End();

Convert Between Images

To convert between different image formats, do not use LoadHdPhotos and SaveHdPhotos, because they go through GDI+ as "the middle man," which is a highly inefficient way to convert images. You can use WIC directly to do it.

class ConvImage
{
public:
    bool ConvertImage(
        CComPtr<IWICImagingFactory> imagingFactory,
        const std::wstring& szSrcFile,
        const std::wstring& szDestFile );
};

if(m_ImagingFactory)
{
    // saving
    ConvImage convImage;

    convImage.SetLossless(false);
    convImage.SetCompressionQuality(0.8f);
    convImage.SetImageQuality(0.8f);

    convImage.ConvertImage(
        m_ImagingFactory,
        L"D:\\Media\\lyf_39.jpg",
        L"D:\\Media\\lyf_39.hdp" );
}

Sample Code

I have included two sample projects to demonstrate the classes I wrote. The DateStampImage project opens an image and adds a current date/time stamp at the top left corner of the image, and you can save it in any format you want. ImageConvertor is a simple project to convert between different image formats using WIC.

The post Windows Imaging Component appeared first on CodeGuru.

]]>
Outline Text, Part 1 https://www.codeguru.com/multimedia/outline-text-part-1/ Thu, 30 Apr 2009 17:34:00 +0000 https://www.codeguru.com/uncategorized/outline-text-part-1/ Introduction I am an avid fan of animes(Japanese animations). As I do not understand the Japanese language, the animes which I watched, have English subtitles. These fan-subbed animes have the most beautiful fonts and text. Below is a screenshot of the “Tales of the Abyss”, an anime based on a fantasy game with the same […]

The post Outline Text, Part 1 appeared first on CodeGuru.

]]>
Introduction

I am an avid fan of animes (Japanese animations). As I do not understand the Japanese language, the animes I watch have English subtitles. These fan-subbed animes have the most beautiful fonts and text. Below is a screenshot from "Tales of the Abyss," an anime based on a fantasy game of the same name.

I was fascinated by the outline text, so I searched the web for a library that would let me render such text. Sadly, I found none. Those I did find were too difficult to retrofit for my general purpose, and I did not fully understand their sparsely commented code. I decided to roll up my sleeves and write my own outline text library.

Single Outline

Above is an example of outline text. Below is the generic GDI+ code to display such text using the GraphicsPath class. Generally, to draw outline text, you first get the text path, then render the outline by drawing the path, and finally render the text by filling the interior of the path.

using namespace Gdiplus;
Graphics graphics(dc.GetSafeHdc());
graphics.SetSmoothingMode(SmoothingModeAntiAlias);
graphics.SetInterpolationMode(InterpolationModeHighQualityBicubic);

GraphicsPath path(FillModeWinding);
FontFamily fontFamily(L"Arial");
StringFormat strformat;
wchar_t buf[] = L"CodeGuru Is The Best!";

// Add the string to the path
path.AddString(buf,wcslen(buf),&fontFamily,FontStyleRegular,36,PointF(10.0f,10.0f),&strformat);

// Draw the outline first
Pen pen(Color(255,0,255),4);
graphics.DrawPath(&pen, &path);
// Draw the text by filling the path
SolidBrush brush(Color(0,128,128));
graphics.FillPath(&brush, &path);

Before you rush off to make your own outline library, I have to tell you about one pitfall of GDI+: GDI+ cannot handle PostScript OpenType fonts; it can only handle TrueType fonts. I searched for a solution and found Sjaak Priester's Make GDI+ Less Finicky About Fonts. His approach is to parse the font file for its glyphs and draw their outlines. Sadly, I cannot use his code, because his library is under the GNU license and I want to make my code free for all to use. I racked my brains for a solution. Since GDI (not GDI+) can display PostScript OpenType fonts and supports path extraction through BeginPath/EndPath/GetPath, I decided to use just that to get my path into GDI+. Below is a comparison of the GDI+ path and the GDI path. Note: both are rendered by GDI+; only the path extraction differs, with one using GDI+ and the other using GDI to get the text path.

The top one uses the GDI+ path and the bottom one uses the GDI path. It looks like GDI paths are inferior and inaccurate. But GDI paths can do the rotated text trick shown below, which GDI+ cannot, because GraphicsPath's AddString takes in a FontFamily object, not a Font object. My OutlineText class provides the GdiDrawString method if you need to use PostScript OpenType fonts. The effect below is Franklin Gothic Demi, size 36, italic text rotated 10 degrees anti-clockwise.

Here is how to do text outline using my library.

using namespace Gdiplus;
Graphics graphics(dc.GetSafeHdc());
graphics.SetSmoothingMode(SmoothingModeAntiAlias);
graphics.SetInterpolationMode(InterpolationModeHighQualityBicubic);

FontFamily fontFamily(L"Arial");
StringFormat strformat;

OutlineText text;
text.TextOutline(Color(0,128,128),Color(255,0,255),4);
text.DrawString(&graphics,&fontFamily,FontStyleRegular, 36, L"CodeGuru Is The Best!", Gdiplus::Point(10,10), &strformat);

Here is how to do rotated text using GDI.

LOGFONT lf;
memset(&lf, 0, sizeof(lf));
lf.lfHeight = -MulDiv(36, pDC->GetDeviceCaps(LOGPIXELSY), 72);
lf.lfWeight = FW_NORMAL;
lf.lfItalic = TRUE;
lf.lfOrientation = 100; // 10 degrees
lf.lfEscapement = 100; // 10 degrees
lf.lfOutPrecision = OUT_TT_ONLY_PRECIS;
wcscpy_s(lf.lfFaceName, L"Arial");

// create and select it
CFont newFont;
if (!newFont.CreateFontIndirect(&lf))
	return;
CFont* pOldFont = pDC->SelectObject(&newFont);

pDC->TextOut(10,80,L"CodeGuru Is The Best!", wcslen(L"CodeGuru Is The Best!"));
// Put back the old font
pDC->SelectObject(pOldFont);

Here is how to do rotated text outline using my library.

using namespace Gdiplus;
Graphics graphics(dc.GetSafeHdc());
graphics.SetSmoothingMode(SmoothingModeAntiAlias);
graphics.SetInterpolationMode(InterpolationModeHighQualityBicubic);

OutlineText text;
text.TextOutline(Color(255,0,0),Color(255,255,255),4);

LOGFONT lf;
memset(&lf, 0, sizeof(lf));
lf.lfHeight = -MulDiv(36, pDC->GetDeviceCaps(LOGPIXELSY), 72);
lf.lfWeight = FW_NORMAL;
lf.lfItalic = TRUE;
lf.lfOrientation = 100; // 10 degrees
lf.lfEscapement = 100; // 10 degrees
lf.lfOutPrecision = OUT_TT_ONLY_PRECIS;
wcscpy_s(lf.lfFaceName, L"Franklin Gothic Demi");

text.GdiDrawString(&graphics,&lf,L"CodeGuru Is The Best!",Gdiplus::Point(10,80));

The post Outline Text, Part 1 appeared first on CodeGuru.

]]>
How to Use a Font Without Installing It https://www.codeguru.com/multimedia/how-to-use-a-font-without-installing-it/ Tue, 14 Apr 2009 17:32:05 +0000 https://www.codeguru.com/uncategorized/how-to-use-a-font-without-installing-it/ Many times, a particular font needs to be used in an applications due to inhouse graphics designer’s font choice. In order for the application to use the fonts, the font needs to be installed using the installer. Too many fonts on the user machine may slow the system down considerably. You can actually get away […]

The post How to Use a Font Without Installing It appeared first on CodeGuru.

]]>

Many times, a particular font needs to be used in an application because of an in-house graphics designer's font choice. For the application to use the font, the font needs to be installed by the installer. However, too many fonts on the user's machine may slow the system down considerably.

You can actually get away without installing the font: GDI and GDI+ each provide two ways for you, as a programmer, to add a font for an application to use without installing it. I’ll show you how in this article!

GDI’s AddFontResourceEx and AddFontMemResourceEx

AddFontResourceEx

Let me first talk about GDI’s two functions for adding fonts to application for use. I’ll then talk about GDI+’s own functions. You can use AddFontResourceEx to add a physical font file for the application to use.

int AddFontResourceEx(
  LPCTSTR lpszFilename, // font file name
  DWORD fl,             // font characteristics
  PVOID pdv             // reserved
);

Here is an example on how to use AddFontResourceEx.

CString szFontFile = L"D:\\SkiCargo.ttf";

int nResults = AddFontResourceEx(
	szFontFile,   // font file name
	FR_PRIVATE,   // font characteristics
	NULL);

To use the font you’ve added, just specify its name in the CreateFont or CreateFontIndirect function like any other installed font. To know the name of the font, just right click on the ttf extension file in the Windows Explorer and select “Open” and you will see its actual name. Or you can use the TTF and TTC class which I wrote. to know its font name

Note: The font filename ("SkiCargo.ttf") in this article happens to match its font name, "SkiCargo"; this is usually not the case! To be on the safe side, use the right-click method or the TTF and TTC class just mentioned to find out the name!

CClientDC dc(this);

dc.SetBkMode(TRANSPARENT);

LOGFONT lf;
memset(&lf, 0, sizeof(lf));
lf.lfHeight = -MulDiv(24, dc.GetDeviceCaps(LOGPIXELSY), 72);
lf.lfWeight = FW_NORMAL;
lf.lfOutPrecision = OUT_TT_ONLY_PRECIS;
wcscpy_s(lf.lfFaceName, L"SkiCargo");

// create and select it
CFont newFont;
if (!newFont.CreateFontIndirect(&lf))
	return;
CFont* pOldFont = dc.SelectObject(&newFont);

// draw some sample text with the private font
wchar_t buf[] = L"The quick brown fox jumps over the lazy dog!";
dc.TextOut( 10, 10, buf, wcslen(buf));

// Put back the old font
dc.SelectObject(pOldFont);

You must remember to call RemoveFontResourceEx before the application exits. You should note that the parameters must be the same as the ones that you fed into AddFontResourceEx!

BOOL RemoveFontResourceEx(
  LPCTSTR lpFileName,  // name of font file
  DWORD fl,            // font characteristics
  PVOID pdv            // Reserved.
);
CString szFontFile = L"D:\\SkiCargo.ttf";

BOOL b = RemoveFontResourceEx(
	szFontFile,   // name of font file
	FR_PRIVATE,   // font characteristics
	NULL          // Reserved.
	);

AddFontMemResourceEx

If your font is in a resource DLL, a cabinet file, or a compressed archive, you can extract it into memory and then use AddFontMemResourceEx to load it from memory.

HANDLE AddFontMemResourceEx(
  PVOID pbFont,       // font resource
  DWORD cbFont,       // number of bytes in font resource
  PVOID pdv,          // Reserved. Must be 0.
  DWORD *pcFonts      // number of fonts installed
);

Here is an example on how to use AddFontMemResourceEx on a font file embedded in the resource.

HINSTANCE hResInstance = AfxGetResourceHandle( );

HRSRC res = FindResource(hResInstance,
	MAKEINTRESOURCE(IDR_MYFONT),L"BINARY");
if (res)
{
	HGLOBAL mem = LoadResource(hResInstance, res);
	void *data = LockResource(mem);
	size_t len = SizeofResource(hResInstance, res);

	DWORD nFonts;
	m_fonthandle = AddFontMemResourceEx(
		data,       // font resource
		len,       // number of bytes in font resource
		NULL,          // Reserved. Must be 0.
		&nFonts      // number of fonts installed
		);

	if(m_fonthandle==0)
	{
		MessageBox(L"Font add fails", L"Error");
	}
}

To use the font you have added, refer to the previous AddFontResourceEx example; it works the same way. Just use it like any other installed font. You should call RemoveFontMemResourceEx before the application exits, although when the process goes away, the system will unload the fonts even if you don't call RemoveFontMemResourceEx. Note: The parameter is the handle that AddFontMemResourceEx returned!

BOOL RemoveFontMemResourceEx(
  HANDLE fh   // handle to the font resource
);
if(m_fonthandle)
{
	BOOL b = RemoveFontMemResourceEx(m_fonthandle);
	if(b==0)
	{
		MessageBox(L"Font remove fails", L"Error");
	}
}

GDI+’s PrivateFontCollection::AddFontFile and PrivateFontCollection::AddMemoryFont

PrivateFontCollection’s AddFontFile

For GDI+, you can use its PrivateFontCollection class member, AddFontFile to add a physical font file.

Status AddFontFile(const WCHAR* filename);

Here is how to use AddFontFile to add a font file.

Gdiplus::PrivateFontCollection m_fontcollection;
//...
CString szFontFile = szExePath + L"SkiCargo.ttf";

Gdiplus::Status nResults = m_fontcollection.AddFontFile(szFontFile);

Here is how to use the font we have just added to the PrivateFontCollection object, m_fontcollection.

// When painting the text
CClientDC dc(this);
Graphics graphics(dc.GetSafeHdc());
SolidBrush brush(Color(0, 0, 0));

FontFamily fontFamily;
int nNumFound=0;
m_fontcollection.GetFamilies(1,&fontFamily,&nNumFound);

if(nNumFound>0)
{
	Font font(&fontFamily,28,FontStyleRegular,UnitPixel);

	StringFormat strformat;
	wchar_t buf[] = L"The quick brown fox jumps over the lazy dog!";
	graphics.DrawString(buf,wcslen(buf),&font,PointF(10.0f,10.0f),&strformat,&brush);
}

Note: unlike the GDI’s AddFontResourceEx and AddFontMemResourceEx, there is no RemoveFontFile for AddFontFile. All added fonts will be removed by PrivateFontCollection’s destructor.

PrivateFontCollection’s AddMemoryFont

For GDI+, you can use its PrivateFontCollection class member, AddMemoryFont to add a font in memory.

Status AddMemoryFont(const VOID *memory, INT length);

Here is how to use AddMemoryFont on a font file embedded in the resource. Similar to AddFontFile, there is no RemoveMemoryFont to call. Everything will be taken care of by PrivateFontCollection’s destructor.

HINSTANCE hResInstance = AfxGetResourceHandle( );

HRSRC res = FindResource(hResInstance,
	MAKEINTRESOURCE(IDR_MYFONT),L"BINARY");
if (res)
{
	HGLOBAL mem = LoadResource(hResInstance, res);
	void *data = LockResource(mem);
	size_t len = SizeofResource(hResInstance, res);

	Gdiplus::Status nResults = m_fontcollection.AddMemoryFont(data,len);

	if(nResults!=Gdiplus::Ok)
	{
		MessageBox(L"Font add fails", L"Error");
	}
}

As for how to use the font you have just added to the PrivateFontCollection object, m_fontcollection, refer to the previous AddFontFile example; it is the same.

The post How to Use a Font Without Installing It appeared first on CodeGuru.

]]>
AL 3D Audio and Environmental Audio Extension https://www.codeguru.com/multimedia/al-3d-audio-and-environmental-audio-extension/ Mon, 25 Jun 2007 22:16:00 +0000 https://www.codeguru.com/uncategorized/al-3d-audio-and-environmental-audio-extension/ 3D Audio and Environmental Audio Extension: Using AL’s Support for 3D Audio and EAX Introduction Games and sophisticated applications require the use of a 3D Audio facility to place their gamers or users in a 3D Space. This results in a virtual world in which the user is immersed, resulting in an improved user response. […]

The post AL 3D Audio and Environmental Audio Extension appeared first on CodeGuru.

]]>
3D Audio and Environmental Audio Extension: Using AL’s Support for 3D Audio and EAX

Introduction

Games and sophisticated applications require the use of a 3D audio facility to place their gamers or users in a 3D space. This results in a virtual world in which the user is immersed, producing an improved user response. Added to this is a technology by Creative Labs, EAX or Environmental Audio Extension, which makes the virtual world richer with respect to 3D sound.

With this simple introduction, you will experience the power of 3D sound and EAX and implement both by using the AL SDK.

Requirements

  1. Download the SDK runtime and the SDK from http://streetx.freespaces.com.
  2. Install the SDK runtime on your system.
  3. Install the SDK as directed so that it can run with VC++ (6 or .NET) or BC++ (32 bit).

An Introduction to 3D Audio

Audio in 3D enables a feeling of reality in sound. To bring this reality to the sound, you require two sets of attributes:

  1. Attributes of the listener.
  2. Attributes of the source.

Listener attributes are the vectors that specify the position of the listener (the gamer or the user) in 3D space, along with the relative velocity of motion. Other attributes, such as the Doppler shift, rolloff factor, maximum and minimum hearing distance, and the listener's FRONT and UP vectors, are also required.

Source attributes are the values that determine the characteristics of the sound source. The most important of these are the vectors for the position of the source and its relative velocity. Others include the cone angle and orientation.

This article will discuss the placement of sound in 3D space with the position and velocity attribute of each. For a full discussion about the attributes, please refer to the help file for the SDK.

Relative Velocity: The Term

The relative velocity of an object may be defined as the (absolute) velocity of the object with respect to a stationary observer. The velocities of the listener and the source can be specified independently, yet the term used is relative velocity. This is because the effect achieved by setting the velocities of the source and listener independently can also be achieved by setting the listener's velocity to ZERO and the source's velocity to the relative velocity of the source with respect to the listener, and vice-versa. And, because nothing is absolute in space, the term relative provides the better approach.

Steps to Coding

  1. Initialize the AL system.
  2. Create an AL context and set the context for current use.
  3. Open a wave buffer with 3D support and, before using it, set it as the current audio source.
  4. Play the wave.
  5. Perform positioning of listener (it can be done anytime after setting a current context as it has nothing to do with opening of wave or the like).
  6. Position the source in the listener’s space.

After you are done using the program…

  1. Stop the wave.
  2. Close the wave.
  3. Delete the AL context. !!!Use the ALDelete() function!!!
  4. Uninitialize the AL system.

Program Name

The source code of the program and a DEMO EXE are attached.

  • Source Code Archive: CG_Audio3D.zip
  • EXE Demo Archive: CG_Audio3D Demo.zip

Project/EXE Dependence

This project implements functions from AL and a few functions from the GL library. This article will not discuss the functions from GL SDK but only those from AL SDK.

Excerpt from the Code

Because this article deals with the 3D location of sound, I will assume that the reader understands the code dealing with initializing and de-initializing the AL system and creating the AL context; it can be studied by downloading the source.

The IDs used in the code are defined as follows:

  • ID_WAVE: Points to the Wave buffer to be used.
  • ID_FONT: Points to the Font resource to be used to display result on GLX Window.
  • ID_TEXTURE: The 2D surface that is used to display 2D graphics on GLX Screen.

Opening Audio with 3D Support

if(!ALOpenWave( ID_WAVE,     //Wave resource ID
   // Wave file name (NULL terminated string)
   filename,
   // We need 3D support on the Wave Resource
   TRUE,
   // Wave data will be streamed into the wave buffer
   TRUE,
   // Wave will be audible even if the window loses focus
   TRUE,
   // We are not concerned about EAX now
   FALSE ))
{
   /* TODO : FILE failed to open.
    * Deal with error
    */
}

Setting 3D Audio Attributes (Position and Velocity Vectors)

ALbool ALApply3D();

This function is very important. Whenever 3D parameters of the source and/or listener are changed, this function must be called for the changes in the 3D to take effect. It must be called after the listener and source 3D attributes are all set. It can be called after a batch of changes so that a more efficient program is created.

ALbool ALSet3DPosition( Vector *vec);

This function will take a vector that contains the position vector of the listener. By default, the listener is placed at Vector(0,0,0).

ALbool ALSet3DVelocity( Vector *vec);

This function will take a vector that contains the velocity vector of the listener. By default, the listener is stationary; in other words, Vector(0,0,0).

Now, the wave ID_WAVE is selected as the current wave buffer by calling ALSetWave(ID_WAVE). Afterward (because you know that your wave is 3D enabled), you can set the 3D source attributes.

ALbool ALSetWave3DPosition(Vector *vec);

This function sets the position vector of the source in 3D space. By default, the source is set at the origin, Vector(0,0,0).

ALbool ALSetWave3DVelocity(Vector *vec);

This function sets the velocity vector of the source in 3D space. By default, the source is stationary; in other words, Vector(0,0,0).

Note: This is a general and simple 3D audio program; the source and demo can be viewed for results and understanding. Next, I will explain the Creative EAX effect on a 3D audio resource. You should remember that the program discussed next is an extension of this 3D audio program; nothing discussed here changes, there is only the addition of Creative EAX functionality.

The post AL 3D Audio and Environmental Audio Extension appeared first on CodeGuru.

]]>