SOAP Archives | CodeGuru
https://www.codeguru.com/soap/

Introduction to Web Components API
https://www.codeguru.com/soap/web-components-api/ (Wed, 13 Jul 2022)

The post Introduction to Web Components API appeared first on CodeGuru.


In this web development tutorial, we are going to look at how web developers can leverage web components to make their codebases more maintainable, deliver high-quality user experiences, and keep their web applications from becoming obsolete due to outdated technologies or coding practices.

This tutorial assumes that readers are at least aware of web fundamentals, such as the basics of programming in HTML, CSS, and JavaScript. Even if your knowledge of these technologies is still basic, however, you can learn to build custom elements, which go a long way toward simplifying the complexities of front-end web applications.

Additionally, if you would like to learn web development and HTML in more of a classroom setting, we have a list of some of the Best Online Courses to Learn HTML and Web Development to get you started.

What are Web Components?

In any typical front-end framework, such as Angular, we usually group components that share similar functionality into a module. These components have a drawback, however: they depend on the underlying library or framework, which means that if you remove the framework from your project, the components will no longer work. You may be wondering whether there are components that do not depend on any particular technology. There are: components that can be used across this whole range of technologies are known as web components. Web components are HTML elements that give developers the power to create new HTML tags, extend existing ones, or extend components created by other developers.

Since the advent of front-end frameworks, component-based web development has been steadily on the rise. Today, web components are a part of more than 10% of all the pages that load in a browser. Tech giants like Google, Facebook, and Microsoft incorporate web components into their technologies and frameworks. JavaScript frameworks, such as Angular, Next.js, Vue, and React are already making use of web components as well.

You can learn more about the different web development frameworks by reading our tutorial: Best JavaScript Frameworks for Web Developers.

What are the Features of Web Components?

Below are some of the features web components bring to the table for web developers and web applications:

  • The W3C is continuously working on the web component specifications to extend their scope. Programmers will be able to create more shared components, and because those components do not rely on a framework or library, developers no longer have to sink time and effort into choosing the right web framework; all they need is some HTML, CSS, and JavaScript to build native web components.
  • Web Components can be used with any JavaScript library or framework that is compliant with HTML. They can be easily created using the browser’s API and do not have a dependency on any third-party library or framework.
  • Web components are non-intrusive. That means they are self-contained, with their own styling and structure, and they do not interfere with any other component or code on the page.

Read: Top Tools for Web Developers

What is the Web Components API?

The power of web components lies in their API. It brings the power to create reusable components with nothing more than HTML, CSS, and vanilla JavaScript. The API uses four web standards as a way of creating reusable components. They are listed below:

  • Custom elements
  • HTML templates
  • ES modules
  • Shadow DOM

Let’s take a more detailed look at each of the web standards that make up the web components API.

Custom Elements

Custom elements are just like standard HTML elements, such as the <p> or <article> tags, except that we create them ourselves through the browser API. Like built-in elements, custom elements are written in angle brackets. The name of a custom element must contain a hyphen – for example: <page-header> or <stories-section>.

It should be noted that, to prevent future naming conflicts with custom elements, browser vendors have committed to not introducing new built-in elements with hyphens in their names.
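As a quick illustration, here is a simplified check for that naming rule. The function name is our own, and the real rule in the HTML specification is more involved (it allows more characters and excludes a handful of reserved hyphenated names such as annotation-xml):

```javascript
// Simplified validity check for custom element names: must start with
// a lowercase letter and contain at least one hyphen. (The actual rule
// in the HTML spec is broader and excludes a few reserved names.)
const looksLikeValidCustomElementName = (name) =>
  /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);

console.log(looksLikeValidCustomElementName('page-header'));     // true
console.log(looksLikeValidCustomElementName('stories-section')); // true
console.log(looksLikeValidCustomElementName('pageheader'));      // false: no hyphen
```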

Custom elements have their own set of semantics, mark-up, and behaviors that you can share across multiple libraries, frameworks, and browsers.

Let’s create a custom element using the following code example:

class ComponentDemonstration extends HTMLElement {
  connectedCallback() {
    // Runs each time the element is inserted into the DOM
    this.innerHTML = `<h3>This is a demonstration of Custom elements</h3>`;
  }
}
customElements.define('component-demo', ComponentDemonstration);

On the HTML page, place the following tag:

<component-demo></component-demo>

In the example above, we have defined an HTML element known as <component-demo>. As you can see, while defining the custom element, we used the extends keyword to inherit from HTMLElement, and then called customElements.define() to register the element with the browser. Note that every custom element you create must extend HTMLElement.
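connectedCallback is only one of the lifecycle callbacks a custom element can implement. Below is a sketch of the full set; the element name is hypothetical, and the stub base class exists only so the snippet can also run outside a browser, where HTMLElement is not defined:

```javascript
// Fall back to an empty base class outside the browser; in a real page,
// HTMLElement is always available and the stub is never used.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class LifecycleDemo extends Base {
  // Attributes listed here trigger attributeChangedCallback
  static get observedAttributes() { return ['label']; }

  connectedCallback() { /* element was inserted into the DOM */ }
  disconnectedCallback() { /* element was removed from the DOM */ }

  attributeChangedCallback(name, oldValue, newValue) {
    // Record the last change so the element can react to it
    this.lastChange = { name, oldValue, newValue };
  }
}

if (typeof customElements !== 'undefined') {
  customElements.define('lifecycle-demo', LifecycleDemo);
}
```

In a browser, the callbacks fire automatically as the element is added, removed, or has an observed attribute changed.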

Read: Tips to Optimize Website Performance

HTML Templates

HTML templates are an excellent way to build reusable layouts using markup. A template is a fragment of markup that you can put inside an HTML page; the browser does not render it until a script instantiates it.

Consider the following code example:

// Grab the <template> element defined in the page's markup
const fragment = document.getElementById('item-template');
const items = [
  { name: 'Bicycle', quantity: 2 },
  { name: 'Dart', quantity: 1 },
  { name: 'Sports Shoes', quantity: 1 }
];

items.forEach(item => {
  // Creating an instance of the template
  const instance = document.importNode(fragment.content, true);
  // Adding relevant content to the template
  instance.querySelector('.name').innerHTML = item.name;
  instance.querySelector('.quantity').innerHTML = item.quantity;
  // Appending the instance to the DOM
  document.getElementById('items').appendChild(instance);
});

The script above consumes a template with the id item-template, creating one list item per entry in the items array. The template itself – along with the list that the instances are appended to – is defined in the following HTML snippet:

<template id="item-template">
  <li><span class="name"></span> &mdash; <span class="quantity"></span></li>
</template>
<ul id="items"></ul>

The above code demonstrates how a script can consume an HTML template and tell the browser how to render it.

    ES Modules API

Before ES Modules were introduced, JavaScript had no standard convention for modules. You can think of a module as a collection of features that you can reuse in other files of a codebase. Without modules, developers had to load each JavaScript file into their programs and web applications with a separate <script> tag.

The ES Modules API introduced a standard way of bundling a collection of features into a module that can be reused in other JavaScript files.
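For example, a module exposes its features with the export keyword, and consumers load them with import. To keep the snippet below self-contained, the module source is inlined via a data: URL and loaded with a dynamic import(); in a real codebase the module would live in its own .js file and be loaded with a static import statement (e.g. import { add } from './math-utils.js'):

```javascript
// Module source that would normally live in its own file, e.g. math-utils.js
const moduleSource = `
  export const add = (a, b) => a + b;
  export default function greet(name) { return 'Hello, ' + name; }
`;

// Dynamic import of the inlined module via a data: URL
const moduleUrl = 'data:text/javascript,' + encodeURIComponent(moduleSource);

const modulePromise = import(moduleUrl).then(({ default: greet, add }) => {
  console.log(add(2, 3));        // 5
  console.log(greet('modules')); // Hello, modules
  return { greet, add };
});
```

In an HTML page, the same module would typically be loaded from a <script type="module"> tag.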

    Shadow DOM

Although the shadow DOM API is not required to create web components, it is a powerful tool for giving individual custom elements their own isolated scope.

    As the name implies, a shadow DOM creates a separate DOM tree inside the element it is attached to. In other words, anything inside of the document’s scope is referred to as light DOM and anything inside the shadow root is called shadow DOM.

A shadow DOM functions like an <iframe>, in that its content is isolated from the rest of the document: CSS styles and JavaScript do not leak out of, or into, the custom element. Let's see how you can set up a shadow DOM. You can attach a shadow root to an element using the attachShadow() method (note that only certain elements, such as a <div> or an autonomous custom element, support it). Below is a code example illustrating setting up a shadow DOM:

    <div>
      <div id="shadowRootDemo"></div>
      <button id="button">Banana</button>
    </div>

    To attach the shadow root to the above node, use the following script:

    const shadowRoot = document.getElementById('shadowRootDemo').attachShadow({ mode: 'open' });
    shadowRoot.innerHTML = `<style>
    button {
      color: orange;
    }
    </style>
    <button id="button">This will use the CSS color: orange <slot></slot></button>`;

Notice that we used the <slot> element to include content from the containing document in the shadow root. Wherever you place a <slot> element, the browser will drop the host element's light-DOM content in at that designated point.

    Browser Support

An important thing to consider while working with the web components API is browser compatibility. The web component APIs we have used in this article are fully supported in all modern browsers, such as Firefox, Chrome, and Edge. The main exception is Internet Explorer 11, for which Microsoft ended support in August 2021; Safari supports autonomous custom elements and shadow DOM, but not customized built-in elements. Polyfills can be used in older browsers to fill the capability gap.
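When support is in doubt, you can feature-detect the individual APIs at runtime and load polyfills only where needed. A minimal sketch (run in a non-browser environment, every check is simply false):

```javascript
// Runtime checks for the three browser APIs used in this article.
function webComponentSupport() {
  return {
    customElements: typeof customElements !== 'undefined',
    shadowDom:
      typeof Element !== 'undefined' && 'attachShadow' in Element.prototype,
    templates: typeof HTMLTemplateElement !== 'undefined',
  };
}

const support = webComponentSupport();
// Load polyfills only for the APIs that are missing, e.g.:
// if (!support.customElements) { /* load a custom elements polyfill */ }
console.log(support);
```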

    Final Thoughts on Web Components API

There is no doubt that web components are going to play a vital role in the development of front-end applications. They have already been adopted by large tech companies such as Google and Facebook; the Accelerated Mobile Pages (AMP) technology and Google's Polymer library are examples of the ever-increasing use of web components in the software industry. The web components specification APIs are here to stay and will continue to grow and evolve as the needs of developers evolve.

    Read: Productivity Tools for .NET Developers

From SOAP to REST to GraphQL: API Deep Dive
https://www.codeguru.com/soap/soap-rest-graphql-api/ (Tue, 14 Jun 2022)

    The post From SOAP to REST to GraphQL: API Deep Dive appeared first on CodeGuru.


    When working with APIs, developers have several choices as far as protocols and specifications are concerned. You can use SOAP, REST, or GraphQL – three of the most popular approaches to build APIs and define the semantics and syntax of the messages that would be transferred over the wire.

    Read: Productivity Tools for .NET Developers

    SOAP is a long-established and well-known protocol that is popular primarily due to its simplicity and ease of use. REST is another approach for building lightweight, HTTP-based APIs that has gained popularity due to its flexibility. GraphQL is the newest of these three and promises to provide an even more flexible way of working with data.

    In this API programming tutorial, we will take a deep dive into SOAP, REST, and GraphQL and how they compare against each other.

    What is Simple Object Access Protocol (SOAP)?

Simple Object Access Protocol (SOAP) is a widely used, XML-based protocol created in the late 1990s for exchanging data in a distributed and decentralized manner.

    The primary distinction between SOAP and REST is that the former focuses on verbs while the latter focuses on resources. REST defines certain constraints and uses a consistent interface to work on them using the HTTP verbs. SOAP is easy to use and understand, which makes it a good choice for many developers.

    Example of a SOAP Message

    A SOAP message is encoded as an XML document and comprises the following elements:

• A mandatory envelope element that marks the start and end of the message
• An optional header element that carries metadata about the SOAP message
• A mandatory body element that contains the payload of the SOAP message
• An optional fault element, inside the body, that represents any errors that occur while processing the SOAP message

    Here is an example of how a SOAP message format typically looks:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
  </soap:Header>
  <soap:Body>
  </soap:Body>
</soap:Envelope>

    Read: Creating SOAP Web Services with JAX-WS

    What are the Benefits of SOAP?

Distributed applications are built to support heterogeneous platforms, so they require a standard data exchange format for moving data between both homogeneous and heterogeneous platforms. The technologies used for data exchange before the advent of SOAP – DCOM, RPC, IIOP, and so on – were constrained to homogeneous platforms. This is where Simple Object Access Protocol (SOAP) comes into play.

    What is Representational State Transfer (REST)?

Representational State Transfer (REST) is an architectural style for designing high-performance, scalable web services. It is based on the idea of resource-oriented architecture: resources are identified by URIs (Uniform Resource Identifiers) and can be manipulated through a set of well-defined operations, including GET, POST, PUT, and DELETE.

REST defines certain constraints that shape the design and architecture of modern-day web applications that run over HTTP. These constraints include the following:

    • Uniform interface
    • Cacheable
    • Stateless
    • Layered System
    • Client-Server Architecture

    A RESTful Web Service adheres to the REST design constraints mentioned above. The following code snippet illustrates a typical RESTful service:

[Route("api/[controller]")]
[ApiController]
public class UserController : ControllerBase
{
    [HttpGet]
    public List<UserDTO> Get()
    {
        //Write your code here to return all user records
    }
    [HttpGet("{id}", Name = "Get")]
    public string Get(int id)
    {
        //Write your code here to return a user record based on the id
    }
    [HttpPost]
    public void Post([FromBody] UserDTO user)
    {
        //Write your code here to insert a user record into the database
    }
    [HttpPut("{id}")]
    public void Put(int id, [FromBody] UserDTO user)
    {
        //Write your code here to update a user record
    }
    [HttpDelete("{id}")]
    public void Delete(int id)
    {
        //Write your code here to delete a user record
    }
}
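From the client side, each action on the controller above maps to an HTTP verb against the resource URL. The sketch below shows hypothetical calls using the fetch API; the base URL and port are assumptions, and no request is actually sent here:

```javascript
// Hypothetical client calls for the UserController shown above.
const baseUrl = 'https://localhost:5001/api/user';

async function getUser(id) {
  // Maps to [HttpGet("{id}")] Get(int id)
  const res = await fetch(`${baseUrl}/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

async function createUser(user) {
  // Maps to [HttpPost] Post([FromBody] UserDTO user)
  const res = await fetch(baseUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(user),
  });
  return res.status;
}

async function deleteUser(id) {
  // Maps to [HttpDelete("{id}")] Delete(int id)
  const res = await fetch(`${baseUrl}/${id}`, { method: 'DELETE' });
  return res.status;
}
```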
    

    What are the Benefits of REST?

    One of the main benefits of REST is that it enables developers to create web services that are scalable and easy to maintain. REST also allows for a greater degree of flexibility when it comes to how data is represented and accessed.

    The REST architectural style has swiftly gained popularity across the globe for creating and architecting applications that use the HTTP protocol. Because of its simplicity, REST has achieved considerable popularity globally in place of SOAP and WSDL-based web services.

You can learn more about REST in our tutorial: An Introduction to Representation State Transfer (REST).

    What is GraphQL?

GraphQL is a modern query language developed by Facebook for working with APIs. It provides a more flexible way to query data than REST and SOAP, and it is steadily becoming a standard for API development. GraphQL addresses the over-fetching and under-fetching problems often encountered in RESTful applications.

    GraphQL is a powerful query language that can be used to access data from any API. It is based on the concept of a graph, which is a collection of nodes (vertices) and edges (links). You can leverage GraphQL to query data from any data source, including databases, file systems, and even other APIs.

    The following code snippet illustrates how you can use GraphQL in C#:

        public class UserDTO
        {
            [GraphQLType(typeof(NonNullType))]
            public int Id { get; set; }
            [GraphQLNonNullType]
            public string UserName { get; set; }
            [GraphQLNonNullType]
            public string Password { get; set; }
        }
    

Here’s the GraphQL query to retrieve all users:

query {
  allUsers {
    id
    userName
  }
}
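On the wire, a GraphQL query is typically sent as a JSON payload in an HTTP POST to a single endpoint (conventionally /graphql). Below is a sketch using the fetch API; the endpoint and field names are assumptions, and nothing is actually sent here:

```javascript
// Sending a GraphQL query over HTTP; the server replies with a JSON
// object of the shape { data: ..., errors: [...] }.
const allUsersQuery = `
  query {
    allUsers {
      id
    }
  }
`;

async function fetchAllUsers(endpoint) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: allUsersQuery }),
  });
  const { data, errors } = await res.json();
  if (errors && errors.length) throw new Error(errors[0].message);
  return data.allUsers;
}
```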

    When Should You Use SOAP, REST, or GraphQL?

    There are a few key differences between SOAP, REST, and GraphQL that will help you decide which one to use for your project.

    Protocol

    Any transport protocol can be used to access a SOAP-based service, including TCP/IP, UDP, SMTP, and HTTP. Thus, SOAP-based services are not restricted to the HTTP protocol. REST is defined as an architectural style based on the concept of resources and uses HTTP for its communication protocol.

    Message Format

SOAP is a protocol that uses XML for its message format. This makes it verbose and sometimes difficult to parse; on the other hand, it also means that SOAP messages are self-describing, making them well suited for use in distributed environments.

REST (Representational State Transfer) is a more modern approach whose messages are usually represented as JSON. JSON payloads are compact but not self-describing, which can make them less suitable for some distributed environments.

    Mobile Apps

    Some of the benefits of SOAP include language and platform neutrality, and simplicity. However, SOAP is not well suited for mobile applications since SOAP messages are verbose and convoluted.

GraphQL is a newer technology, developed by Facebook, that uses JSON for its message format. GraphQL messages are self-describing, and GraphQL can retrieve data from multiple sources in a single request, making it well suited for mobile applications that often need to load data from multiple APIs.

REST is lightweight, since it typically uses the JSON format for data exchange, which also makes it well suited for mobile applications.

    Performance

In terms of performance, GraphQL often comes out ahead of the other two, because it fetches only the data that is requested rather than an entire dataset. In other words, it allows you to query exactly the data you need – nothing more, nothing less. This can be a huge benefit when working with large amounts of data, as it reduces the amount of data that needs to be transferred.

    Final Thoughts on SOAP, REST, and GraphQL

    So, which of the three should you use for your web service? If you need maximum compatibility, then SOAP is the way to go. In general, you should use SOAP when you need a very robust and self-describing message format.

    REST APIs are the most popular choice for public APIs, while SOAP and GraphQL are better suited for private or enterprise applications. If you want better performance and scalability, then REST or GraphQL would be a better choice.

    Read: Best Practices to Design RESTful APIs

Implement Swagger In ASP.NET Web API
https://www.codeguru.com/soap/swagger-asp-net/ (Sun, 14 Nov 2021)

    The post Implement Swagger In ASP.NET Web API appeared first on CodeGuru.

Swagger is a language-agnostic specification for describing REST APIs; it is also referred to as the OpenAPI specification. Most developers add Swagger to their .NET API projects to get interactive API documentation. In this article, we will create an ASP.NET Web API application and add the Swagger NuGet package to publish a Swagger-enabled API.

    How to Create a Web API Project

    In this section, we will create a web API project in Visual Studio. To begin, open Visual Studio and create a new project, as shown in the image below:

(Image: Visual Studio Create New Project)

    Select “ASP.NET Web Application” template, as depicted in the figure below:

(Image: ASP.NET Web Application Template)

    Next, name the project. For our example, we have created it with the name SampleSwaggerAPI. Now, click OK, as shown here:

(Image: SampleSwaggerAPI Project)

In the next screen – the Create a new ASP.NET Web Application page – select the Empty project template and check the Web API option. Refer to the image below:

(Image: Web API Empty Project)

    Wait for the project to be created; it may take a few minutes. Next, we will add a new controller as seen in the image here:

(Image: New Controller Added)

Add the following C# code to the new AuthenticationController:

using System.Web.Http;
using Newtonsoft.Json.Linq;

namespace SampleSwaggerAPI.Controllers
{
    public class AuthenticationController : ApiController
    {
        /// <summary>
        /// Authenticates a user from the supplied JSON credentials.
        /// </summary>
        [HttpPost]
        [Route("Authentication")]
        public JObject AuthenticationService([FromBody] JObject authenticationJson)
        {
            JObject retJson = new JObject();
            string username = authenticationJson["username"].ToString();
            string password = authenticationJson["password"].ToString();
            if (username == "user" && password == "user")
            {
                retJson.Add(new JProperty("authentication", "successful"));
            }
            else
            {
                retJson.Add(new JProperty("authentication", "unsuccessful"));
            }
            return retJson;
        }
    }
}

Once the controller has been created, we will add Swagger to our project. For that, we need to add the Swashbuckle NuGet package. Right-click the project and click Manage NuGet Packages, as seen in the next image:

(Image: Add New NuGet Package)

Once you have installed Swashbuckle in your project, you will find a SwaggerConfig file in the App_Start folder:

(Image: Swashbuckle NuGet Package Installed)

Next, go to Project Properties. In the Build section, check the XML documentation file option and copy the path for later reference in code; these XML documentation comments will be used by Swagger. See below:

(Image: XML Documentation file path)

Now, go to the SwaggerConfig file added in the App_Start folder. Search for c.IncludeXmlComments(GetXmlCommentsPath()) and uncomment it. Then add the following C# method to the SwaggerConfig class:

private static string GetXmlCommentsPath()
{
    return System.AppDomain.CurrentDomain.BaseDirectory + @"\bin\SampleSwaggerAPI.xml";
}

Next, run the Visual Studio project and append /swagger to the URL: http://localhost:49498/swagger. You should see something that looks like the image below:

(Image: Test Swagger API)

Implementing Swagger in ASP.NET Conclusion

We hope this article will be helpful for developers wanting to get started with Swagger in ASP.NET. Swagger defines a set of rules and tools for semantically describing APIs, as we have covered above. Look for more Swagger API tutorials in the near future!

    Read: How to Deploy a Webjob in Azure

Why Should I Move to the Cloud?
https://www.codeguru.com/soap/why-cloud-migration/ (Fri, 15 Oct 2021)

    The post Why Should I Move to the Cloud? appeared first on CodeGuru.


There is a lot of talk about moving to the cloud among IT business leaders. However, despite the fact that modern cloud technology has existed since 2006 (and its conceptual roots date back to the 1960s), not everyone has a clear idea of what the cloud is, what the cloud types are, what the applications of the cloud are, and so on. Further, many developers may feel that, since they already have their own on-premises servers and infrastructure, why on earth would they want to suffer the pain of migrating to the cloud? What are the benefits of moving to the cloud? How do security and privacy work in the cloud?

    All of these questions are legitimate; businesses should not change their technology model just to embrace the latest tech. Migrating to the cloud, in fact, has many benefits for developers, which we will be discussing in today’s article, as well as answer some of the legitimate concerns of migration.

    Cloud Compute versus Classic Compute

    In this section we will cover some of the differences between classic computing and cloud computing and discuss software as a service (SaaS) and platform as a service (PaaS).

In the past, we used to download applications from the Internet and run programs on our physical computers and servers. With cloud-based software as a service, there is no longer the headache of searching for files to download, installing them, and updating the application when new features or bug fixes come out. Instead, users can simply enjoy using ready-to-use software online without ever needing to worry about application maintenance. That's just one side of the cloud, however – there are many more elements to it.

    When it comes to comparing platform as a service (PaaS), there are also many benefits over traditional computing. For example, with a cloud-based development platform, you can easily run and manage your applications and not have to be bothered with maintaining and updating all the hardware and software, including operating systems, storage, networking, databases, middleware, runtimes, frameworks, development tools, security, upgrades, backups, and more.

With regard to teams and user management, the cloud not only allows for multiple hierarchical organizations of team users (similar to classic computing architecture), but also enables developers to work in real time, synchronized from anywhere in the world – certainly a benefit in the current and post-pandemic world.

In truth, we are already using many cloud services, even if we do not realize it explicitly: posting on social media, checking online bank balances, writing in online document editors, receiving and sending email, and so on.

    Advantages of Cloud Computing

    There are many businesses that have taken the leap to the cloud and are taking advantage of the benefits of cloud computing, which include reducing costs and increasing efficiency and income. Other benefits of migrating to the cloud include:

• Reduced infrastructure costs: instead of purchasing your own expensive infrastructure, you pay only for what you actually use. This is usually expressed as trading capital expenditure for operating expenditure.
• A pay-as-you-go model: you can scale your usage up and down as needed without purchasing more hardware or leaving it unused.
• The ability to migrate your database, and even your preferred legacy software, to the cloud.
• The choice between private, public, hybrid, and multi-cloud setups as your needs dictate.
• Increased business agility, letting your remote team work in real time with the office team.
• Increased security: cloud providers take on their share of the security responsibility, and all-in-one security approaches, such as the zero-trust model, are standard in most offerings.
• Easier digital and cloud transformation: cloud providers have their own programs to make migration easier.
• Access to new technologies: the capabilities of the cloud environment enable more powerful, new-generation software and technologies such as AI and IoT.
• Resources (hardware and software) that are kept up to date automatically with no downtime, delivering constant, fast, and reliable performance.
• Access to your resources from anywhere, whether your team works from home or remotely.
• Data safety in case of disaster: the cloud has built-in backup and recovery solutions thanks to its multi-zone servers.

    How to Get Started with the Cloud

The first step to getting started in the cloud is to migrate your existing work from an on-premises infrastructure to the cloud. But what is cloud migration? It is the process of moving your digital operations and assets, including any legacy infrastructure, data, and software, to the cloud.

    Here are some key features to look for when choosing a cloud provider:

• Productivity: look for an efficiency level that meets your business's requirements.
• Security: make sure the cloud provider offers a security model suitable for your data and overall requirements.
• Accessibility: ensure you can access your software properly from anywhere and from any device.
• Support: get a sense of the provider's technical support team and its experience; read plenty of reviews and testimonials.
• Cost: conduct a careful cost-benefit study, especially if your sector has not yet migrated.
• Infrastructure: get the details of the cloud service and the specifications of its hardware and virtual machine technology.
• Trial: take a free demo to make sure the solution works for your needs.
• Guidance: get help from honest, proven cloud IT services experts, and don't try to migrate alone.

    Read: Successful Cloud Migration with Automated Discovery Tools.

C++ Programming: How Does Shell Context Menu Work? – Part 2
https://www.codeguru.com/soap/c-programming-how-does-shell-context-menu-work-part-2/ (Fri, 13 Aug 2010)

    The post C++ programming: How does Shell Context Menu Work ? – Part 2 appeared first on CodeGuru.

    ]]>
    In Part 1, we learned what a context menu is and how Visual Studio helps to create one at design time and at runtime on a Windows Form.
    In this tutorial, we are going to touch on some of the key aspects of the context menu controlled and generated by the shell explorer, with the help of the demo code included with this article.

    You should have observed that the context menu items change with respect to the object or context on which a right-click is performed. These activities are controlled by the Windows Shell Explorer.

    The Shell Explorer, also called the file manager, ships with every release of the Windows operating system. It provides a graphical user interface for managing files. The process that hosts the Shell Explorer is explorer.exe.

    Here’s a list of operations the Shell Explorer is responsible for:

    1. Rendering the graphics for the taskbar and the desktop
    2. Rendering the graphics for windows, folders, icons, file menus, and toolbars
    3. Rendering the graphics for the Start menu
    4. Rendering the graphics that display the tree structure of your file system, which is also called the explorer
    5. Providing the search engine feature for your file system
    6. And, most important and the topic of this article, the context menu

    (Try this: on your computer, kill the explorer.exe process. You will observe that the taskbar disappears and the desktop shows no icons. Active windows on your desktop will still be visible, but minimizing them will make them disappear.

    Note that following the above step will not crash your computer or delete any data. You can bring back your desktop by launching the Task Manager, going to File -> New Task, and then typing explorer.)
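    As a concrete illustration (not taken from this article’s demo code), the simplest static context menu entries are read by the Shell Explorer from the Windows Registry. The tool name and path below are hypothetical; only the \shell\<verb>\command key layout is the standard mechanism:

    ```reg
    Windows Registry Editor Version 5.00

    ; Adds an "Open with MyTool" entry to the context menu of all file types.
    ; "MyTool" and its install path are hypothetical examples.
    [HKEY_CLASSES_ROOT\*\shell\Open with MyTool]

    ; The command the shell runs when the menu item is clicked;
    ; %1 is replaced with the path of the right-clicked file.
    [HKEY_CLASSES_ROOT\*\shell\Open with MyTool\command]
    @="\"C:\\Program Files\\MyTool\\MyTool.exe\" \"%1\""
    ```

    Dynamic context menus (the subject of shell extension handlers) go further by implementing COM interfaces such as IContextMenu, but static verbs like the above are the simplest entry point.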

    The post C++ programming: How does Shell Context Menu Work ? – Part 2 appeared first on CodeGuru.

    ]]>
    How to Write a COM+ Component https://www.codeguru.com/soap/how-to-write-a-com-component/ Tue, 05 Jan 2010 18:17:36 +0000 https://www.codeguru.com/uncategorized/how-to-write-a-com-component/ Introduction COM+ is a great framework for Enterprise Development. I now want to introduce to how you can write a component to be use by the COM+ runtime. let’s go! Implementation Open VS2008 (VS2003 and VS2005 are also okay) and select IDE File | New | Project…. The Project Wizard will pop up. Choose Visual […]

    The post How to Write a COM+ Component appeared first on CodeGuru.

    ]]>
    Introduction

    COM+ is a great framework for enterprise development. I now want to introduce how you can write a component to be used by the COM+ runtime.

    Let’s go!

    Implementation

    Open VS2008 (VS2003 and VS2005 are also okay) and select File | New | Project… from the IDE. The Project Wizard will pop up. Choose Visual C++ | ATL and then name the project component. Click OK. Click Next on the first “Welcome to the ATL Project Wizard” page and select Support COM+ 1.0.

    Click Finish so that the IDE generates the code for you.

    Now you can add the interface that will be used in the COM+ runtime. Select Project | Add Class… and then choose ATL | ATL COM+ 1.0 Component.

    Click Add and name it Bird. Click Next to reach the COM+ 1.0 page, then check IObjectControl and IObjectConstruct. Because I want to support transactions, I choose Required.

    Click Finish.

    Now you will add a method Fly by adding the following:

    [id(1), helpstring("method Fly")] HRESULT Fly([out, retval] LONG* lSpeed);

    and implement it as follows:


    STDMETHODIMP CBird::Fly(LONG* lSpeed)
    {
        // Return the bird's speed; 0xbee is an arbitrary demo value.
        *lSpeed = 0xbee;
        return S_OK;
    }

    That’s all regarding the code. Now you will install it!

    First, create an empty COM+ application

    1. In the console tree of the Component Services administrative tool, select the computer on which you want to create an application.
    2. Select the COM+ Applications folder for that computer.
    3. On the Action menu, point to New, and then click Application. You can also right-click the COM+ Applications folder, point to New, and then click Application.
    4. On the Welcome page of the COM+ Application Install Wizard, click Next, and then in the Install or Create a New Application dialog box, click Create an empty application.
    5. In the box provided, type a name for the new application. (Note that the following special characters cannot be used in an application name: \, /, ~, !, @, #, %, ^, &, *, (, ), |, }, {, ], [, ', ", >, <, ., ?, :, and ;.) Under Activation type, click Library application or Server application. Click Next.
      Note that a server application runs in its own process. Server applications can support all COM+ services. A library application runs in the process of the client that creates it. Library applications can use role-based security but do not support remote access or queued components.
    6. In the Set Application Identity dialog box, choose an identity under which the application will run. If you select This user, type the user name and password. You must also retype the password in the Confirm password box. Click Next. (The default selection for application identity is Interactive User. The interactive user is the user logged on to the server computer at any given time. You can select a different user by selecting This user and entering a specific user or group.)
      Note that the Set Application Identity dialog box appears only if you selected Server application for the new application’s activation type in the COM Application Install Wizard’s preceding dialog box. The identity property is not used for library applications.
    7. In the Add Application Roles dialog box, add any roles you want to associate with the application. By default, only the CreatorOwner role is defined.
    8. In the Add Users to Roles dialog box, populate each role you created in the last step with the users, groups, or built-in security principals to which you want to grant the privileges associated with that role. By default, the interactive user is placed in the CreatorOwner role.
    9. Click Finish.

    Now you can add a component to a COM+ application

    1. In the console tree of the Component Services administrative tool, select the computer hosting the COM+ application.
    2. Open the COM+ Applications folder for that computer, and select the application in which you want to install the component(s).
    3. Open the application folder and select Components.
    4. On the Action menu, point to New, and then click Component. You can also right-click the Components folder, point to New, and then click Component.
    5. On the Welcome page of the COM+ Application Install Wizard, click Next, and then in the Import or Install a Component dialog box, click Install new components.
    6. In the Install new components dialog box, click Add to browse for the component you want to add.
    7. In the Select files to install dialog box, type the filename of the component to install or select a filename from the displayed list. Click Open.

    After you add the files, the Install new components dialog box displays the files you have added and their associated components. If you select the Details check box, you will see more information about the file contents and the components that were found. Note that unconfigured COM components must have a type library. If COM+ cannot find your component’s type library, your component will not appear in the list. You can also remove a file from the Files to install list by selecting it and clicking Remove.

    8. Click Next, and then click Finish to install the component.

    Ok, you have installed it!

    Next, you will use it on a remote machine. Right-click the COM+ application and select Export. Note that you should choose Application Proxy there, then choose a folder in which to store the proxy; this produces an .msi file and a .cab file. Copy them to the remote machine and run the .msi file. The application proxy information is installed into the COM+ catalog and is visible in the Component Services administrative tool. You can find the component in the %Drive%\Program Files\ComPlus Applications folder. Now you can write a client application. For instance, if you use C#, you can reference this DLL from %Drive%\Program Files\ComPlus Applications and use the Bird.Fly() method.

    Other stuff

    This is only a simple guide to COM+ programming! Please send your feedback!

    The post How to Write a COM+ Component appeared first on CodeGuru.

    ]]>
    Displaying the Input Language indicator in a WTL dialog https://www.codeguru.com/soap/displaying-the-input-language-indicator-in-a-wtl-dialog/ Mon, 06 Jul 2009 16:25:38 +0000 https://www.codeguru.com/uncategorized/displaying-the-input-language-indicator-in-a-wtl-dialog/ Introduction I recently had to create a dialog containing a password field, in which I wanted to display the current Keyboard Input Language indicator. This is useful when the user is working in a multi-language environment and enters text in a password field and cannot see the text being entered. For example, the user has […]

    The post Displaying the Input Language indicator in a WTL dialog appeared first on CodeGuru.

    ]]>
    Introduction

    I recently had to create a dialog containing a password field, in which I wanted to display the current Keyboard Input Language indicator. This is useful when the user is working in a multi-language environment and enters text in a password field and cannot see the text being entered. For example, the user has a US keyboard with Hebrew characters on the keys and can switch between English (EN) and Hebrew (HE). This indicator is the same as one sees in the standard Windows Logon or Change Password dialogs when more than one input language has been configured. When the user toggles the input language via the Language Bar (or using a hotkey combination such as left Alt-Shift), the indicator changes to match the Language Bar.

    (To configure additional input languages open Control Panel, click on Regional and Language Options, click on the Languages tab and click the Details… button under Text services and input languages.)

    WM_INPUTLANGCHANGE

    After a bit of digging around I read in MSDN that whenever the user changes the keyboard input language, first the WM_INPUTLANGCHANGEREQUEST message is posted to the window that currently has the focus. The application accepts this change by passing the message to DefWindowProc (this happens automatically within the WTL framework by default), or rejects the change by handling the message (so that it effectively never arrives at DefWindowProc). Once the change is accepted the WM_INPUTLANGCHANGE message is posted to the topmost affected window which once again passes this to DefWindowProc, which in turn passes the message to all first-level child windows, and so on.

    However, one cannot simply add a handler to a dialog’s message map – these messages never enter the message map’s ProcessWindowMessage function! (I found the main application window receives WM_INPUTLANGCHANGEREQUEST in its override of CMessageFilter::PreTranslateMessage but never receives WM_INPUTLANGCHANGE, and if a modal dialog is being displayed at the time, it receives neither. Similarly, PreTranslateMessage in the modal dialog itself never sees either of these messages.)

    What actually happens is exactly as stated in Microsoft’s documentation – these messages are “posted to the window that currently has the focus”. This does not mean one’s modal dialog, it means the control within the dialog that currently has the focus. In order for a dialog to receive these messages it has to superclass every child control that can potentially receive focus. For example if one has a password dialog with two edit text controls (username and password) and two buttons (OK and Cancel), then there are four controls that can receive focus by the user clicking on them (the text controls) or tabbing to them (all four). If one of these four controls has focus when the user presses Alt-Shift to change the input language, then the messages of interest are sent to that control only.

    Intercepting Windows messages sent to a dialog’s child controls

    To intercept these messages one adds a CContainedWindow member to the dialog class for each control and then subclasses each control in order to redirect its message map to that within the dialog class. The dialog’s message map will contain an alternate message map for these controls where a handler can be added for the WM_INPUTLANGCHANGE message. Any messages not handled by the alternate message map are routed to the controls, but now the dialog gets to see them first! One will have CContainedWindow members for all the controls (and perhaps other data members such as CString to conveniently initialise/retrieve the text values). (If one’s dialog contains other controls that cannot receive focus, such as a bitmap or disabled control, then no special handling is required for these.)

    private:
       CContainedWindow   _cwUsername;
       CContainedWindow   _cwPassword;
       CContainedWindow   _cwOk;
       CContainedWindow   _cwCancel;
    

    CContainedWindow’s constructor takes three arguments: the class name of an existing window class to base the control on (but here we want to attach to an existing control so this is left null), a pointer to the object containing the message map (the dialog class) and the message map ID of the message map to process the messages (the default message map has an ID of 0, so here one can specify an alternate message map ID such as 1). This is initialised in the constructor of the dialog class:

    PasswordDlg::PasswordDlg() :
        _cwUsername(0, this, 1),
        _cwPassword(0, this, 1),
        _cwOk(0, this, 1),
        _cwCancel(0, this, 1) {}
    

    Subclassing the controls (and retrieving the control text via CString members) is most easily achieved using ATL’s DDX support. So firstly stdafx.h should include these lines:

    //...
    #include <atlddx.h>     //For DDX support
    #include <atlcrack.h>   //For message maps
    //...
    

    Then define the message and DDX maps in PasswordDlg.h:

    //...
    class PasswordDlg : public CDialogImpl<PasswordDlg>,
                        public CWinDataExchange<PasswordDlg>   //For DDX
    {
    //...
       BEGIN_MSG_MAP_EX(PasswordDlg)
          MSG_WM_INITDIALOG(OnInitDialog)                     //Needed to subclass the controls and set the initial language indicator
          MSG_WM_CTLCOLORSTATIC(OnCtlColorStatic)             //Needed to change the language indicator to white on blue
          COMMAND_HANDLER_EX(IDCANCEL, BN_CLICKED, OnCancel)
          COMMAND_HANDLER_EX(IDOK, BN_CLICKED, OnOk)
          //The alternate message map for the controls
          ALT_MSG_MAP(1)                                      //Default messageMapId is 0 so I use 1 for the first alternate messageMapId
             MSG_WM_INPUTLANGCHANGE(OnInputLangChange)        //The handler to be notified of the language change
       END_MSG_MAP()
    
       BEGIN_DDX_MAP(PasswordDlg)
          DDX_TEXT(IDC_USERNAME, _username)
          DDX_TEXT(IDC_PASSWORD, _password)
          DDX_TEXT(IDC_LANGUAGE, _inputLanguage)
          DDX_CONTROL(IDC_USERNAME, _cwUsername)
          DDX_CONTROL(IDC_PASSWORD, _cwPassword)
          DDX_CONTROL(IDOK, _cwOk)
          DDX_CONTROL(IDCANCEL, _cwCancel)
       END_DDX_MAP()
    
    private:
    //...
        BOOL OnInitDialog(CWindow wndFocus, LPARAM lInitParam);
        HBRUSH OnCtlColorStatic(CDCHandle dc, CStatic wndStatic);
        void OnCancel(UINT uNotifyCode, int nID, CWindow wndCtl);
        void OnOk(UINT uNotifyCode, int nID, CWindow wndCtl);
        void OnInputLangChange(DWORD dwCharSet, HKL hKbdLayout);
    
    private:
        CString             _username;
        CString             _password;
        CString             _inputLanguage;
        CContainedWindow    _cwUsername;
        CContainedWindow    _cwPassword;
        CContainedWindow    _cwOk;
        CContainedWindow    _cwCancel;
    };
    

    The resource IDs are defined in resource.h:

    //...
    #define IDD_PASSWORD                130
    #define IDC_USERNAME                1000
    #define IDC_PASSWORD                1001
    #define IDC_LANGUAGE                1002
    

    The dialog resource is defined in PasswordDlg.rc. Notice the static text label to display the language (IDC_LANGUAGE):

    IDD_PASSWORD DIALOGEX 0, 0, 201, 66
    STYLE DS_SETFONT | DS_MODALFRAME | DS_FIXEDSYS | WS_POPUP | WS_CAPTION | WS_SYSMENU
    CAPTION "Enter Password"
    FONT 8, "MS Shell Dlg", 400, 0, 0x1
    BEGIN
        DEFPUSHBUTTON   "&OK",IDOK,88,44,50,14
        PUSHBUTTON      "&Cancel",IDCANCEL,143,44,50,14
        EDITTEXT        IDC_USERNAME,65,8,127,12,ES_AUTOHSCROLL | NOT WS_TABSTOP
        EDITTEXT        IDC_PASSWORD,65,25,127,12,ES_PASSWORD | ES_AUTOHSCROLL
        LTEXT           "&User name:",IDC_STATIC,11,10,47,8
        LTEXT           "&Password:",IDC_STATIC,11,27,47,8
        CTEXT           "EN",IDC_LANGUAGE,9,50,12,10,SS_CENTERIMAGE
    END
    

    The initial value for the language indicator is set in OnInitDialog in PasswordDlg.cpp, using GetKeyboardLayout to get the active input locale ID followed by GetLocaleInfo with the flag LOCALE_SABBREVLANGNAME to look up the three-letter language name. The first two letters are what is displayed (the third letter is the sublanguage):

    //...
        TCHAR langName[KL_NAMELENGTH] = {0};
        if(::GetLocaleInfo(MAKELCID(::GetKeyboardLayout(0), 0), LOCALE_SABBREVLANGNAME, langName, KL_NAMELENGTH))
        {
            _inputLanguage = langName;
            _inputLanguage = _inputLanguage.Left(2);
        }
        DoDataExchange();   //Subclass the controls and initialise the language indicator (and default username if used).
    

    The language text is displayed as white on blue by adding a handler for WM_CTLCOLORSTATIC:

    //...
    HBRUSH PasswordDlg::OnCtlColorStatic(CDCHandle dc, CStatic wndStatic)
    {
        int nId = wndStatic.GetDlgCtrlID();
        if(IDC_LANGUAGE == nId)
        {
            ::SetTextColor(dc, RGB(255, 255, 255));
            ::SetBkColor(dc, RGB(0, 0, 255));
            return (HBRUSH)GetStockObject(NULL_BRUSH);
        }
        SetMsgHandled(FALSE);
        return 0;
    }
    

    Finally the change notification is used to update the indicator in the handler for WM_INPUTLANGCHANGE:

    //...
    void PasswordDlg::OnInputLangChange(DWORD /*dwCharSet*/, HKL hKbdLayout)
    {
        TCHAR langName[KL_NAMELENGTH] = {0};
        if(::GetLocaleInfo(MAKELCID(hKbdLayout, 0), LOCALE_SABBREVLANGNAME, langName, KL_NAMELENGTH))
        {
            _inputLanguage = langName;
            _inputLanguage = _inputLanguage.Left(2);
            DoDataExchange(false, IDC_LANGUAGE);
        }
        SetMsgHandled(FALSE);
    }
    

    The attached demo (see downloads below) was written in Visual Studio 2005 using the latest version of WTL (8.0).
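    As an aside, the two-letter trimming used in both handlers above (CString::Left(2) applied to the LOCALE_SABBREVLANGNAME result) is straightforward to express in platform-neutral standard C++; a minimal sketch, with hypothetical function and variable names:

    ```cpp
    #include <cassert>
    #include <string>

    // Derive the two-letter indicator ("EN", "HE") from the three-letter
    // abbreviated language name ("ENU", "HEB"); the third letter encodes
    // the sublanguage and is dropped, as in the dialog code above.
    std::string IndicatorFromAbbrevName(const std::string& abbrevName)
    {
        return abbrevName.substr(0, 2);
    }

    int main()
    {
        assert(IndicatorFromAbbrevName("ENU") == "EN");
        assert(IndicatorFromAbbrevName("HEB") == "HE");
        return 0;
    }
    ```
    
    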

    The post Displaying the Input Language indicator in a WTL dialog appeared first on CodeGuru.

    ]]>
    Framework Source Code Stepping https://www.codeguru.com/soap/framework-source-code-stepping/ Mon, 08 Sep 2008 18:37:00 +0000 https://www.codeguru.com/uncategorized/framework-source-code-stepping/ The first step in enabling source code step-through with MFC/ATL and the C/ C++ Runtime Libraries (CTR) is to ensure that the source code is installed when Visual Studio is installed. Figure 1 shows the Visual Studio installer options for source code installation. There are separate nodes for CRT and ATL/MFC installation, and granular control […]

    The post Framework Source Code Stepping appeared first on CodeGuru.

    ]]>
    The first step in enabling source code step-through with MFC/ATL and the C/C++ Runtime Libraries (CRT) is to ensure that the source code is installed when Visual Studio is installed. Figure 1 shows the Visual Studio installer options for source code installation. There are separate nodes for CRT and ATL/MFC installation, and granular control of source code installation based on character byte width and thread safety is also available. With the source code successfully installed, the options shown in Figure 2 should be displayed.

    Figure 1: Installing the Visual C++ Source Code Libraries

    Figure 2: Visual C++ Source Code Library Paths

    With the options shown in Figure 1 and Figure 2 set, stepping into the CRT and MFC/ATL source code is extremely simple—the compiler treats the library source code the same as other source code for the project, and stepping into a CRT, ATL, or MFC method can be accomplished with a simple Step-Into Debug command. If the Microsoft Symbol Server (covered in this article) is configured to bring down debug symbol information, it is important that the PDB files that install as part of Visual Studio and the Visual Studio Service Packs are configured to be searched before the Microsoft Symbol Server. The debug symbols that the Microsoft Symbol Server brings down have the source code information stripped out of them, and if these are loaded in preference to the debug symbol files that ship with Visual Studio and its Service Packs, stepping into the CRT, MFC and ATL source will not be possible. The correct settings for the symbol file location are shown in Figure 3.

    Figure 3: Symbol File Location

    If the debug symbol files have been downloaded from the Microsoft Symbol Server previously, it is necessary to delete the symbol files from the local cache as well as to add the c:\windows\symbols\dll path to the symbol search path. The Modules debug window can be used to inspect where the debug symbols for a particular DLL have been loaded from, and whether they contain source code information. Figure 4 shows the Modules window with this information displayed.

    Figure 4: Module Debug Window
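    The search order described above can also be expressed through the _NT_SYMBOL_PATH environment variable, which the Visual Studio debugger honours: local directories listed first are searched before any srv* symbol-server entry. A sketch, in which the cache directory name is illustrative:

    ```
    _NT_SYMBOL_PATH=c:\windows\symbols\dll;srv*c:\symbolcache*https://msdl.microsoft.com/download/symbols
    ```

    With this setting, the PDB files shipped with Visual Studio (which retain source code information) win over the stripped symbols downloaded from the public server.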

    Visual C++ 2008 Service Pack 1, which contains the MFC Feature Pack (see these two previous articles for coverage of the MFC updates and TR1 enhancements) in addition to a number of bug fixes, contains updated debug symbol files and source code files that allow all the new Feature Pack functionality to be stepped through.

    The post Framework Source Code Stepping appeared first on CodeGuru.

    ]]>
    Professional System Library: Introduction https://www.codeguru.com/soap/professional-system-library-introduction/ Fri, 22 Aug 2008 16:06:02 +0000 https://www.codeguru.com/uncategorized/professional-system-library-introduction/ Foreword Most software developers share the same pattern in their professional career of having to deal with projects of a similar nature, although a few developers manage to jump from one project to a completely different one. Personally for me, I found one particular area that has always taken a great deal of my time […]

    The post Professional System Library: Introduction appeared first on CodeGuru.

    ]]>

    Foreword

    Most software developers share the same pattern in their professional careers of dealing with projects of a similar nature, although a few manage to jump from one project to a completely different one.

    Personally, I have found one particular area that has always taken a great deal of my time from one project to another: coding around retrieving and changing information about the system/OS, the current process, threads, the various hardware and software configuration of the system, the security context, and so on. I hope this all sounds familiar to many developers.

    Not only does one have to spend time figuring out how to access the same information, depending on which development platform is being used, but this knowledge is also very hard to carry over from one platform to another. For example, VC++, VB6, C#, Delphi, and Office are all so different in every way that your code from one may seem totally unusable in another.

    Then, the complexity kicks in. While we can pull just about any trick in C++ in quite a natural way, doing the same things in more restrictive environments is either impossible or breaks the seeming integrity of the system, such as using unmanaged code in .NET or relying on many external procedure imports in VB6.

    Out of systematic practice in various development environments came the idea to summarise all my knowledge in this area and offer software developers a simple and unified way in which all such information can be accessed easily, in the same way in any development environment, and with the very minimum of effort.

    This whole article is an introduction to the initiative of writing a library that allows easy access to the most frequently used information about the system, the client process, and the environment.

    This is a very recently started project (July 2008). I am trying to organize all additional information about this project, as well as its development efforts, on the www.prosyslib.org website.

    Introduction

    An application can retrieve information from the system in one of four ways:

    1. Standard Windows API
    2. Undocumented Windows API
    3. Direct access to the system: Windows Registry + File System
    4. WMI (Windows Management Instrumentation)

    When it comes to choosing which method is best to use, the choice depends mostly on the following criteria (given in the order in which the average developer looks at these things):

    1. Complexity of implementation
    2. Reliability
    3. Speed of execution
    4. Resource consumption

    Professional System Library (ProSysLib) is a project that unifies access to information about the process, system, and environment, so the developer no longer has to make a tough choice among these criteria, deciding which one is most important or which can be sacrificed.

    ProSysLib presents all information using the concept of a root namespace, very much similar to that in .NET where System is the root namespace for everything. Much like it, ProSysLib has its own System root namespace that defines the entry point for all the sub-namespaces and functionality of the library.

    The picture at the beginning of this article shows the top hierarchy of namespaces below System. These define the basis for further classification of all the information that can be retrieved.

    Technology Highlights

    ProSysLib DLL is a Unicode COM in-process server that uses a Neutral memory model. It is Thread-Safe, and implements Automation interfaces only. The protocol (type library) declarations of 32-bit and 64-bit versions of the component are identical, thus allowing transparent integration with development tools that can mix 32-bit and 64-bit modes, while using the same type library (signature) of the component. ProSysLib is also immune to the DLL Hell problem (read ProSysLib SDK for details).

    Implementation is done entirely in VC++ 2008, using only COM, ATL, STL, and Windows API.

    The entire ProSysLib framework is based on Just-In-Time Activation, which means that each and every namespace and object is instantiated and initialized only when used by the client application for the first time; until then, ProSysLib is completely weightless resource-wise.

    At the moment of publishing this article, only a few objects and namespaces of the library were introduced. As for the rest of the namespaces, properties and methods, if an application tries to use them, the library will throw the COM exception “NOT IMPLEMENTED” to tell you that you are trying to use something in the library that has been declared but not yet implemented.

    Using the Code

    Because the library concept is built upon a root namespace, it is the only interface that needs to be created by the client application to have access to everything else, much like the System namespace in .NET. In fact, the fool-proof implementation of the library won’t let you create any other interface of the library even if you try.

    Declaring and instantiating a variable can look different in different development environments, although using it will look pretty much the same. In this article, all code examples are simplified to C# clients only. Any developer should be able to work out how this would look in his environment of choice.

    // Declare and instantiate ProSysLib root namespace;
    PSLSystem sys = new PSLSystem();
    

    Now, by using this variable you can access anything you want underneath the root namespace.

    Even though ProSysLib is targeted to implement access to many kinds of information, the project started only recently, and there are not that many features implemented so far. However, I did not want to draw abstractions with a finger in the air (one can figure them out by looking at the ProSysLib Documentation), so all the examples provided below are real ones; in other words, they are fully functional already.

    So, consider a few examples of what you can do with ProSysLib as of today, less than one month from the beginning of the project.

    Privileges

    Many applications need to know about and control the availability of process privileges. For instance, the Debug privilege can be important when accessing some advanced information in the system that is otherwise unavailable. ProSysLib provides a collection of all available privileges under the PSLSystem.Security.Privileges namespace. If one needs to enable the Debug privilege in the process, the code would be as shown here:

    sys.Security.Privileges.Find("SeDebugPrivilege").Enabled = true;

    In other words, you access the collection of privileges, locate the privilege of interest, and enable it. You would normally need to verify that the Find method successfully located the privilege in the list, but because the Debug privilege is always available, the check is omitted here.

    Process Enumeration

    One of the very popular subjects in articles here is enumerating all available processes in the system, finding a particular process, killing a process by name, and the like. ProSysLib enumerates all processes running in the system under the PSLSystem.Software.Processes namespace. This collection is very flexible and allows any kind of operation one needs to perform on processes in the system.

    Attached to this article is a simple C# application that shows just one example of how ProSysLib can be used. The example enumerates either all processes or the ones that were launched under the current user account. It displays just some of the available information about each process, and allows killing any process with the press of a button.

    Here is just a small code snippet from the example where you populate a list view object with information about processes:

    // Go through all processes;
    foreach (PSLProcess p in sys.Software.Processes)
    {
        ListViewItem item = ProcessList.Items.Add(p.ProcessID.ToString());

        string sProcessName;
        if (p.ProcessID == 0)    // System Idle Process
            sProcessName = "System Idle Process";
        else
        {
            sProcessName = p.FileName;
            // If the OS is 64-bit and the found process is not,
            // append " *32" as Task Manager does;
            if (sys.Software.OS.Is64Bit && p.Is64Bit == false)
                sProcessName += " *32";
        }

        // Adding some of the available process details...
        //
        // NOTE: p.FilePath will always be empty for 64-bit
        // processes when a 32-bit client runs on a 64-bit system,
        // simply because Windows prohibits 32-bit processes from
        // having any access to 64-bit-specific folders;

        item.SubItems.Add(sProcessName);
        item.SubItems.Add(p.FilePath);
        item.SubItems.Add(p.UserName);
        item.SubItems.Add(p.ThreadCount.ToString());
        item.SubItems.Add(p.HandleCount.ToString());

        // Associate each list item with the process object;
        item.Tag = p;
    }
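    For the kill button, the demo can recover the process object from item.Tag of the selected row. The actual demo presumably calls ProSysLib's own API; as a hedged sketch using only the standard System.Diagnostics API instead (and assuming ProcessID is an int, as its use above suggests), a handler might look like:

    ```csharp
    // Hypothetical Kill-button handler: the PSLProcess stored in
    // item.Tag carries the process ID, which the standard .NET
    // API can use to terminate the process.
    private void KillButton_Click(object sender, EventArgs e)
    {
        if (ProcessList.SelectedItems.Count == 0)
            return;

        PSLProcess p = (PSLProcess)ProcessList.SelectedItems[0].Tag;
        try
        {
            System.Diagnostics.Process.GetProcessById(p.ProcessID).Kill();
        }
        catch (ArgumentException)
        {
            // The process has already exited.
        }
    }
    ```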
    

    The demo application binary is provided in both 32-bit and 64-bit versions. One of the most interesting things about ProSysLib is that it offers a new, efficient way of using COM that bypasses the usual COM registration requirements altogether. Both versions unpack and run the demo application against the ProSysLib COM library even though the component is never registered on your PC. If you assume this is COM Isolation for .NET, you would be wrong. It is an implementation of Stealth Deployment for COM, which I devised over a long practice of distributing COM projects. A full description of the idea is given in the ProSysLib SDK, in the Deployment chapter.
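    The Stealth Deployment technique itself is documented in the ProSysLib SDK; as a point of comparison only, Windows also offers a standard registry-free activation mechanism, registration-free COM, driven by a side-by-side application manifest. A minimal sketch (the assembly name, DLL name, and CLSID below are placeholders, not ProSysLib's actual values):

    ```xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <assemblyIdentity name="MyApp" version="1.0.0.0" type="win32" />
      <!-- Declares the COM server DLL so its classes can be
           activated without any registry entries. -->
      <file name="SomeComServer.dll">
        <comClass clsid="{00000000-0000-0000-0000-000000000000}"
                  threadingModel="Apartment" />
      </file>
    </assembly>
    ```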

    The post Professional System Library: Introduction appeared first on CodeGuru.

    ]]>
    Better Visual C++ Debugging https://www.codeguru.com/soap/better-visual-c-debugging/ Tue, 08 Jul 2008 17:26:46 +0000 https://www.codeguru.com/uncategorized/better-visual-c-debugging/ Debug symbols are one of the key elements of an effective debugging session. Debug symbols keep a mapping of the source code to the generated EXE or DLL binary; this allows the debugger to set breakpoints at source code locations and display rich debugging information when a breakpoint is hit. Debug symbols are stored in […]

    The post Better Visual C++ Debugging appeared first on CodeGuru.

    ]]>
    Debug symbols are one of the key elements of an effective debugging session. Debug symbols keep a mapping of the source code to the generated EXE or DLL binary; this allows the debugger to set breakpoints at source code locations and display rich debugging information when a breakpoint is hit. Debug symbols are stored in program database (PDB) files, and these are generated in Visual C++ 2008 for both debug and release by default. The project setting for debug file generation is located in the Linker | Debugging section, as shown in Figure 1.

    Figure 1: Controlling Debug File Generation
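    The same project settings map to compiler and linker switches; a sketch of a command-line release build that still produces full PDB information (file names are illustrative):

    ```
    cl /O2 /Zi main.cpp /link /DEBUG /OPT:REF /OPT:ICF /OUT:MyApp.exe
    ```

    /Zi emits debug information to a PDB during compilation, /DEBUG makes the linker produce the final PDB, and /OPT:REF with /OPT:ICF restore the release-mode optimizations that /DEBUG would otherwise disable.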

    For most small-team development, ensuring that all the projects that make up a solution are generating debug information is typically all that is required to ensure a good debugging experience. As the size of the development team grows and as bugs become more interesting, better debugging symbol support is required. For large teams, having all the projects required to build an application within the same solution will eventually become too time consuming from a build perspective. For DLLs that change infrequently and for DLL dependencies built by other sub-teams, excluding the project for these DLLs from the solution makes sense.

    The problem with excluding DLLs from a solution occurs when a breakpoint needs to be set in the external project or when the call stack contains significant calls from the binary dependency. Unless the PDB file has also been copied over with the DLL, breakpoints in functions within the DLL are difficult to set, and the call stack will be missing the function names. While manually building the required DLL to generate the PDB file is not usually an overly difficult task, this process can become tedious when it needs to be repeated frequently.

    The solution to this problem is to place the PDB files for all common DLLs on a network share and have Visual Studio automatically download and cache them. Prior to Visual Studio 2005, this was a difficult exercise that involved using the Microsoft Symbol Server DLL (SymSrv.dll) from the Debugging Tools for Windows toolkit and setting up some Windows environment variables (Knowledge Base Article 311503 details the procedure). Starting with Visual Studio 2005 and continuing with Visual Studio 2008, the download of debug symbols from an intranet server can be achieved easily by using the Visual Studio settings shown in Figure 2.

    Figure 2: Download Debug Symbols from an Intranet Server

    The dialog allows one or more PDB locations to be specified, and optionally allows a path to be provided where the debug symbols can be cached locally. As with a local PDB file, Visual Studio and the symbol server will only download and cache a PDB if it matches the DLL that has been loaded by the process. Locally cached files are stored with a checksum in the folder path, so PDB files corresponding to many different builds of a binary can be stored locally at the same time.
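    Outside the IDE, the same store locations can be expressed through the _NT_SYMBOL_PATH environment variable, which both Visual Studio and the Debugging Tools for Windows honor; a sketch (the share and cache paths are illustrative):

    ```
    _NT_SYMBOL_PATH=srv*C:\LocalSymbolCache*\\buildserver\symbols*http://msdl.microsoft.com/download/symbols
    ```

    The srv* syntax chains stores left to right: symbols are looked up in the local cache first, then the intranet share, and finally Microsoft's public symbol server, with each hit cached in the stores to its left.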

    PDB files store the full path of the source code files that were used to build the binary, and if the files are in a different location on the machine where the binary is being debugged, Visual Studio will display a prompt asking for the location of the source code file. It is generally a good idea to ensure that all the developers within a team are using the same folder and drive structure to store the source code files that they pull down from the source repository to avoid path mismatches and a number of other similar issues.

    Even with the best development environment setup, it is possible that partway through a debugging session, it will become apparent that the debug symbols for a particular DLL haven’t been loaded. If it is simple to reproduce the application state in the debug session, uploading the PDB file to the symbol server and restarting the debugger will generally be the best option. However, if reaching the same point in the debugging session would take a fair amount of time and effort, it is possible to manually load the PDB file from the Modules window, as shown in Figure 3.

    Figure 3: Manually Loading Debug Symbols

    The post Better Visual C++ Debugging appeared first on CodeGuru.

    ]]>