Gavin Lanata, Author at CodeGuru
https://www.codeguru.com/author/gavin-lanata/

Migrating to ASP.NET Core 2.0
https://www.codeguru.com/dotnet/migrating-to-asp-net-core-2-0/ (Mon, 28 Aug 2017)

Before we get under way talking about migrating a simple ASP.NET Core 1.1 app to 2.0, we need to keep in mind that 1.1.2 is under LTS support, and that upgrading to 2.0 might not be for you. If you want more information on the current support levels of the various versions of .NET Core, you can find that here.

If you’ve decided to go ahead and want to migrate to 2.0, you’ll also need the 2.0 SDK if you haven’t already installed it. You can find that here.

Once installed, the 2.0 SDK can still be used to build 1.1 applications if you want; I can confirm this because I've removed all previous versions of the SDK from my system and had a good play around with older projects.

Also, please note that I'll be using VS 2017, upgraded to the latest version, for this article; this is required because it uses the new csproj file. If you're working on a 1.1 application under VS 2015, you may still be using the old project.json. If this is the case, you can find assistance here…

Upgrading to 2.0

I'm going to walk through the upgrade using an empty 1.1 ASP.NET Core application created from the template provided by VS. The first thing we need to do is change our target framework by editing the project's csproj file. Right-click the project and click 'Edit *.csproj'; you can use Figure 1 as a reference.

Figure 1: Editing the csproj file

If you’re new to VS 2017, you now are able to edit the csproj file while the project is open. There is no need to unload the project first. And, once we have this file open, we need to change this line:

<TargetFramework>netcoreapp1.1</TargetFramework>

to

<TargetFramework>netcoreapp2.0</TargetFramework>

Also, while we’re in the csproj file, we can update our package references. For example, we can change this line:

<ItemGroup>
   <PackageReference Include="Microsoft.AspNetCore"
      Version="1.1.2" />
</ItemGroup>

to

<ItemGroup>
   <PackageReference Include="Microsoft.AspNetCore.All"
      Version="2.0.0" />
</ItemGroup>
Note: Microsoft.AspNetCore.All is a metapackage that pulls in many of the related ASP.NET Core packages, so you don't have to include each one as a separate reference.

Now, once you’ve done this, go ahead and upgrade any of your other packages as needed. However, keep in mind that, in some cases, it may be better to stick with the lowest version possible.

The next thing, while we’re looking at the package references, is to upgrade any CLI tool references as needed. For example, you may be using Entity Framework Core while at the same time using the CLI to create migrations and update your database. This update would look something like this…

<DotNetCliToolReference
   Include="Microsoft.EntityFrameworkCore.Tools.DotNet"
   Version="2.0.0" />

Moving away from the csproj, let’s turn our attention to the Program.cs. A 1.1 app would have code that looks something like this:

public static void Main(string[] args)
{
   var host = new WebHostBuilder()
      .UseKestrel()
      .UseContentRoot(Directory.GetCurrentDirectory())
      .UseIISIntegration()
      .UseStartup<Startup>()
      .UseApplicationInsights()
      .Build();

   host.Run();
}

In 2.0, this has been simplified and now looks something like this…

public class Program
{
   public static IWebHost BuildWebHost(string[] args) =>
      WebHost
      .CreateDefaultBuilder(args)
      .UseStartup<Startup>()
      .Build();

   public static void Main(string[] args)
   {
      BuildWebHost(args).Run();
   }
}

This new format is recommended and even required in some situations.
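If you're wondering what happened to the other calls from the 1.1 version, CreateDefaultBuilder takes care of them for you: it sets up Kestrel, the content root, IIS integration, the default configuration sources (appsettings.json, environment variables, command-line arguments), and console/debug logging. You can still chain your own calls on top of those defaults; the following sketch (the UseUrls value is purely an example of my own) shows the idea…

public static IWebHost BuildWebHost(string[] args) =>
   WebHost
   // Kestrel, content root, IIS integration, configuration,
   // and logging all come from CreateDefaultBuilder.
   .CreateDefaultBuilder(args)
   // Example only: layer an extra setting of your own on top
   // of the defaults.
   .UseUrls("http://localhost:5050")
   .UseStartup<Startup>()
   .Build();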

Finally, let's make a small change to your global.json to target the new SDK. If your project is missing one, as is the case with some templates for new projects, you can use Figure 2 as a reference for where to place it.

Figure 2: The global.json file

If you only have the 2.0 SDK installed, leaving out the global.json is also an option; the build will, of course, default to the only SDK available on your system.

Now, the content of the global.json looks like this…

{
   "sdk": { "version": "2.0.0" }
}

At this point, you should be able to build and run the application. I've edited the final middleware component in Startup.cs to look like this:

app.Run(async (context) =>
{
   await context.Response.WriteAsync("Hello World from Code Guru!");
});

Figure 3: The application running under ASP.NET Core 2.0

Conclusion

Although we used only a simple application to demonstrate the migration here, I've completed the same steps on much larger projects without issue. Even so, as I said at the start, I would recommend taking the time needed to decide whether the upgrade is right for you; for all new projects, though, I would heartily recommend starting with .NET Core 2.0.

If you have any questions about this article, you can always find me on Twitter @GLanata.

Capturing User Input in Unity3D to Change Behavior/Movement
https://www.codeguru.com/csharp/capturing-user-input-in-unity3d-to-change-behavior-movement/ (Fri, 07 Jul 2017)

Although it isn’t important to the content of this article, you may—or may not—have been following along with my series of Unity3d articles. In these articles, we covered creating a scene, animation, gravity, and building a UWP application from Unity3D.

Reading the previous articles isn't necessary to follow this one; however, what you'll learn here will round out those lessons quite nicely. Today, you'll look at building a small scene where the elements react to each other in small ways. This interaction will be limited for the scope of this article, but the technique used to achieve it will be of great use when expanding your own game.

Building the Unity3D Scene

Start Unity3D. If you don’t have Unity3D, you can download a copy from its Web site.

Note: You will need to create an account if you don’t have one already. For this article, I’m using Visual Studio 2017 for code editing.

Once started, create an empty 3D project, and give it a useful name:

Figure 1: The Unity3D Create project dialogue

Be sure to select ‘3D,’ and click the Create project button.

Before looking at adding the elements and scripts to the example scene, it is worth taking a look at the finished product. Figure 2 shows the main viewport: a Plane with two spheres and a camera.

Figure 2: The Unity3D scene as shown once created

In the image, you can see the camera’s field of view indicated by the thin lines coming from the camera. If you select the camera, you’ll also see what the camera sees via the preview window at the bottom right of the scene view.

Figure 3: The camera preview window (visible when the camera is selected)

What this camera sees here is very important. It’s what the user will see when the game is running.

There are a number of actions that will occur when the scene shown in Figure 2 is complete. Clicking the left mouse button on the sphere closest to the camera will send it towards the second sphere. The second sphere, using an attached script of its own, will do something when hit by the first sphere.

For this article, the second sphere will be destroyed when the first collides with it. However, a Plane will need to be added so the spheres don’t fall off the bottom of the scene when the game is played.

To add the Plane, select 3D Object→Plane from the GameObjects menu. (In Unity 4, this would be Create Object→Plane from the same menu.)

Figure 4: Adding a Plane to our scene

Once the Plane is created, go ahead and create two spheres, which will need to be positioned. If you look at Figure 5, you’ll see the settings I’ve entered for the first sphere.

Figure 5: Sphere One’s starting position

The settings for the second sphere, which I have renamed to 'TargetSphere,' are shown in Figure 6.

Figure 6: TargetSphere’s position settings

If you run the scene at this point, you should see the two spheres in the camera's field of view, but they will not fall, despite the gap between them and the Plane below. This is because we need to add a component named 'Rigidbody' to each of the spheres. It is this component that will deal with gravity and enable movement later in this article.

To do this, select a sphere—be sure to have the Inspector tab open—and click add component, then physics, and then Rigidbody. Please use Figure 7 as a reference.

Figure 7: Adding a ‘Rigidbody’ to your selected sphere

Do the same for the other sphere.

You may have noticed in Figure 2 that there are two scripts in the scene showing on the assets panel.

Figure 8: The two game scripts

These two scripts will contain the code to push the first sphere, and add behaviour to the second, ‘Target,’ sphere. To add a script, right-click the assets panel and click Create, then C# Script. I’ve named my two scripts ‘GameScript’ and ‘TargetBehaviourScript.’ The first will be attached to the camera, and the second to the sphere that will be hit; this will be covered in a moment.

Firstly, look at the code in these scripts. The ‘GameScript’ code looks like this:

public class GameScript : MonoBehaviour {

   public float force = 5f;

   // Use this for initialization
   void Start ()
   {

   }

   // Update is called once per frame
   void Update ()
   {

      if (Input.GetMouseButtonUp(0))
      {
         Ray ray = Camera.main.ScreenPointToRay
            (Input.mousePosition);
         RaycastHit hit;

         if (Physics.Raycast(ray, out hit))
         {
            if (hit.collider.name == "Sphere")
            {
               var sphere = hit.collider.gameObject;
               sphere.GetComponent<Rigidbody>().AddForce
                  (new Vector3(0.0f, 0.0f, force),
                   ForceMode.VelocityChange);

            }
         }
      }
   }
}

In this script, which will be attached to the camera, the mouse button 0 (which is usually the left button) is being watched for a press. When it’s been pressed, a ray is drawn from the point of the click through the game scene. From this ray, you want to detect if any colliders are hit; specifically, you are looking for the collider attached to the first sphere, named ‘Sphere,’ being hit by the ray.

If it's been hit, you know it was the first sphere hit by the click, and now you can add some force to make it move across the Plane. For this article, the sphere will only move in one direction; making the sphere travel along the direction of the click is left to you, although one possible approach is sketched below.
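A minimal sketch of that approach, assuming it replaces the body of the 'Sphere' branch in GameScript.Update above, would be to push the sphere along the ray's direction, flattened onto the Plane…

   if (hit.collider.name == "Sphere")
   {
      // Push along the direction the ray travelled, but keep the
      // force parallel to the Plane so the sphere isn't driven
      // into the ground.
      var direction = ray.direction;
      direction.y = 0f;

      hit.collider.gameObject.GetComponent<Rigidbody>().AddForce(
         direction.normalized * force, ForceMode.VelocityChange);
   }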

Moving on, look at the code that is to be attached to the second Sphere. It looks something like the following:

public class TargetBehaviourScript : MonoBehaviour
{

   // Use this for initialization
   void Start()
   {

   }

   // Update is called once per frame
   void Update()
   {

   }

   void OnCollisionEnter(Collision collision)
   {
      if (collision.collider.name == "Sphere")
      {
         DestroyObject(this.gameObject);
      }
   }

}

Because 3D objects have a collider added out of the box (by default), you can make use of the OnCollisionEnter method, which is passed the collision object; from that, you can work out what collided with the object the script is attached to. Then, all you have to do is check whether it was the first sphere that collided with this second sphere, and destroy the second sphere if the check comes back true.

If all has gone according to plan, you now have the code needed to make this scene work. Next, attach the scripts to their respective components. You can use Figure 9 as a reference to do that now.

Figure 9: Attaching a script to a component

Looking at point 1 in Figure 9, drag the 'GameScript' to the main camera, indicated by point 2. If the action was successful, and you select the camera, you'll see the script in the list of components added to the object, indicated by point 3.

Do the same with the ‘TargetBehaviourScript,’ but this time attach it to the ‘TargetSphere.’

And, that’s it! Run the game by pressing the play button found at the top of the Unity desktop; click the first sphere and observe the results.

Figure 10: The game in play mode after the first sphere was clicked

Conclusion

What you’ve seen in this article is a very simple implementation of cause and effect. You have a sphere which strikes another, causing it to disappear. Although this is very simple, this is the basis of much of the behaviour you might see in a game; it is enough to open a world of interaction within your own scene.

If you have any questions, you can always find me on Twitter @GLanata

Integrating Maps into Your UWP App
https://www.codeguru.com/csharp/integrating-maps-into-your-uwp-app/ (Fri, 26 May 2017)

Over the last decade and more, mapping services have been made available to developers through various APIs. And, each year, their sophistication grows continually. It’s very impressive what’s available to us now out of the box, and you can be up and running within minutes. However, as the title of this article states, let’s look at what we can do with mapping in a UWP (Universal Windows Platform) app.

If you've never played around with UWP apps, don't worry; what we'll be covering in this article will be kept to the basics. The first thing we'll need is an installation of Visual Studio 2017.

I’ll be making use of VS 2017; the latest update at the time of this writing is 15.0.26430.6. Some older versions may be useable for this example; however, a lot of what we’ll be looking at here was demonstrated at MS Build 2017. And, unless you’re fully up-to-date, some features may not be available to you. Therefore, I would recommend you perform any updates if needed.

Putting Together the Application

Using the default templates from VS, create yourself a blank UWP application.

Figure 1: Selecting the template from the New Project dialogue

Once created, you should have a solution that looks something like this…

Figure 2: The empty UWP solution

For the first part of this article, we'll be focusing our attention on the MainPage.xaml file, which you can see highlighted in Figure 2. Open it up, and let's make a small addition to our XML namespace listings at the top of the file.

<Page
   x:Class="Mapping.Example.MainPage"
   xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
   xmlns:local="using:Mapping.Example"
   xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
   xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
   xmlns:mapControl="using:Windows.UI.Xaml.Controls.Maps"
   mc:Ignorable="d">

   <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

   </Grid>
</Page>

The line I've added is the one beginning with xmlns:mapControl. Once added, we can go ahead and add the mapping control to our XAML. Use the following code as a guide…

<Grid Background="{ThemeResource
   ApplicationPageBackgroundThemeBrush}">
   <mapControl:MapControl/>
</Grid>

At this point, we have a semi-functional UWP app that will give you a useable map. Go ahead and run the app, on the local machine, and observe the result.

Figure 3: The UWP mapping app running

This very simple application proves to us how easy it is to get going. Also, you may have noticed we haven’t added any NuGet packages during our initial setup. We don’t need to; everything we need is given to us out of the framework.

If you want to take a moment to play around with the map, you’ll find it responds to input such as mouse, keyboard, and touch for zooming and panning. Again, this is without us having to do anything extra at this point.

However, if you’ve worked with UWP before, you’ll probably be aware that you can access the device’s geo-location—assuming, of course, the user has given the needed permissions. But, if we did have this location, how might we set up our app to zoom the map to this location on start-up? Let’s take a look…

We’ll need to name our MapControl to make it available to us in the code behind. I’ve made this change to the XAML.

<Grid Background="{ThemeResource
   ApplicationPageBackgroundThemeBrush}">
   <mapControl:MapControl x:Name="mapControl"/>

</Grid>

Now, we'll need to create an event handler for our MapControl's 'Loaded' event in the MainPage.xaml.cs file. In that event, we'll zoom the map in using the MapControl's ZoomLevel property, and set the geo-location using the map's Center property. Take a look at the next code segment…

void Map_Loaded(object sender, RoutedEventArgs args)
{
   mapControl.Center = new Geopoint(
      new BasicGeoposition()
         { Latitude = 0, Longitude = 30 });

   mapControl.ZoomLevel = 10;
}

Now, I've set the Center of our map to a geo-location of latitude 0 and longitude 30. Off the top of my head, these coordinates should give us a location somewhere in Africa. Do note that the value types of latitude and longitude are double, and the values are restricted to -90 to 90 for latitude and -180 to 180 for longitude. If you're familiar with mapping and coordinates, these ranges will be familiar to you for very good reasons.
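If you'd rather centre the map on the device's actual position, a sketch of how that might look with the Geolocator class from Windows.Devices.Geolocation is shown below. The handler becomes async, the user must grant consent, and the Location capability needs to be declared in the app manifest…

// Sketch only: requires the Location capability and user consent.
// using Windows.Devices.Geolocation;
async void Map_Loaded(object sender, RoutedEventArgs args)
{
   var access = await Geolocator.RequestAccessAsync();

   if (access == GeolocationAccessStatus.Allowed)
   {
      var locator = new Geolocator();
      var position = await locator.GetGeopositionAsync();

      mapControl.Center = position.Coordinate.Point;
      mapControl.ZoomLevel = 10;
   }
}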

Once we have our event handler created, attach it to your control in the XAML like so…

<Grid Background="{ThemeResource
      ApplicationPageBackgroundThemeBrush}">

   <mapControl:MapControl x:Name="mapControl"
                          Loaded="Map_Loaded"/>

</Grid>

Run the app, and let’s see where we end up…

Figure 4: The map zoomed in on the location at start-up

As we can see in Figure 4, we've ended up at a location in Africa. Zooming out might give us a better idea of where that is in relation to other areas on the continent.

Conclusion

What we've covered here only scratches the surface. Just by exploring the properties available on the MapControl, it's immediately clear there is a lot you can do with it. Go forth and experiment, and if you have any questions, I'll be around on Twitter @GLanata.

Taking Control of Gravity on Unity3D
https://www.codeguru.com/dotnet/taking-control-of-gravity-on-unity3d/ (Fri, 19 May 2017)

One of the most fun things to play with in Unity is gravity. Even when things go wrong, it's more than amusing to see the effects caused by failure. I would go as far as saying that, at times, I've wanted something to go wrong just to add that bit of extra spice to the day. Anyway, let's look at creating a simple scene in Unity, using some simple objects, in which we'll see how to make use of, and control, gravity.

For this article, I’m using Unity version 5.6.0f3 Personal. And, for editing C# scripts, I’ll be using Visual Studio 2017. However, anything that can be used to edit a C# file will be fine for scripts.

You can download both via the following links…

Once downloaded, if you don’t have them installed yet, you may be asked to update the Visual Studio plug-in for Unity3D. It doesn’t take long to install. Once you’re ready, we can begin.

Creating the Scene

The scene we’ll be using is a very basic one. There’ll be four game objects; the first two will be there by default. These are the camera and a directional light. The second two we’ll add; but, before we do that, let’s create that scene.

Starting up Unity3D, we’re presented with a dialogue box that looks like Figure 1.

Figure 1: The dialogue box shown on start-up

Using the ‘NEW’ button underlined in red, create yourself a new project. You then will be presented with this screen…

Figure 2: Setting up the new project

Be sure to select ‘3D’ for the project type, and of course choose a location of your choice. You can enable or disable analytics at your discretion, but we won’t be doing anything with analytics during this article.

Create the project, and you'll arrive at the place where we can dig in and start doing stuff with our scene. On starting a new project, you'll see something like what is shown in Figure 3…

Figure 3: The newly created scene, which has been saved and named ‘Scene One’

Looking at the hierarchy panel, which is identified by the red line in Figure 3, we can see we have the two objects we mentioned earlier. These two objects are fundamental to the scene, so don’t delete them. Now, let’s add our two custom objects.

Figure 4: Creating a Plane object

Using Figure 4 as a guide, right-click the hierarchy panel and create a Plane object, as shown. Then, do the same and create a Cube.

If everything went to plan, we’ll have a scene that now looks like this…

Figure 5: The scene with a newly created Cube and Plane objects

The cube, however, is a little low; we want it to appear above the Plane. The idea is that the Cube will fall under the effect of gravity, and the Plane will catch it. So, first, let's raise that cube a little. Using the mouse, grab hold of the yellow arrow pointing upwards, and drag it up.

Figure 6: The Cube raised above the Plane

Now, both objects by default should have colliders added to their object tree; this can be viewed by selecting an object and opening the inspector panel on the right-hand side of the environment. But, if we were to play the scene now, using the play button shown at the top of Figure 6, we would see the game objects just remain static in our scene.

What we need to do is add the Rigidbody component to our cube. Using Figure 7 to guide you, select the cube and press the ‘Add Component’ button on the inspector.

Figure 7: The add component drop-down on the inspector panel

If you can't see the Rigidbody component on opening the 'Add Component' drop-down, you can find it under Physics. Once done, simply start your scene again by using the play button and observe the results.

By default, Rigidbody has 'Use Gravity' enabled. That, combined with the colliders added by default to your 3D objects, enables the Plane to catch the falling Cube and stop it from leaving the scene.

Figure 8: The scene in play mode, showing the Cube caught by the Plane

Adding a Little C#

Now we have our scene, and gravity is visibly working. Let's add a little code to do something with our scene at runtime.

Right-click the Assets panel, which by default is at the bottom of your environment. Click Create -> C# Script (see Figure 9).

Figure 9: A C# script added to our assets

Name the file anything you like; its name isn't important for this demonstration. However, before we edit the code, drag and drop the script onto the Main Camera object shown in the hierarchy view. Because the camera is present throughout the lifetime of the scene, it's a good place to run scripts from. And, simply dragging and dropping the script onto a game object will bring that script into play during runtime.

Now, let’s look at some code. Opening the script, my code looks like this…

public class GravityControl : MonoBehaviour {

   GameObject _plane;
   // Use this for initialization
   void Start () {
      _plane = GameObject.Find("Plane");
   }

   // Update is called once per frame
   void Update () {

      if (Input.GetKey(KeyCode.DownArrow))
      {
         _plane.transform.Translate(Vector3.down *
            Time.deltaTime);
      }
      if (Input.GetKey(KeyCode.UpArrow))
      {
         _plane.transform.Translate(Vector3.up *
            Time.deltaTime);
      }

   }
}

At the top of our class, we can declare a field which we’ll use to keep a reference to the Plane object. Then, as the Start method is called, which will happen at the start of the scene, we can use GameObject.Find(“Plane”) to get that object.

Next, in the Update() method, we look for the down and up arrow keys being held down and, on either being held, move the Plane accordingly. Once you've entered your code, save your script and run the scene again.

This time, you should be able to move the Plane up or down, and push or allow the cube to fall, depending on which way you move the Plane.
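If you want to take the 'control' part further, a small sketch like the following (the key bindings, and the assumption that the Cube keeps its default name of "Cube", are my own) could be added to the Update() method above to toggle the Cube's gravity or strengthen the global gravity vector at runtime…

      if (Input.GetKeyDown(KeyCode.G))
      {
         // Toggle gravity for the Cube only.
         var cube = GameObject.Find("Cube").GetComponent<Rigidbody>();
         cube.useGravity = !cube.useGravity;
      }
      if (Input.GetKeyDown(KeyCode.H))
      {
         // Strengthen the global gravity applied to every Rigidbody.
         Physics.gravity = new Vector3(0f, -20f, 0f);
      }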

Conclusion

What we've talked about in this article is a very simple getting-started guide to gravity; from here, I would recommend experimenting with what else you can do in C# to control your scene. Could you place another object that then pushes your cube off the end of the Plane? Even have a series of objects to then catch it as it falls? How about using a sphere and creating a track for it to roll down? As I said at the start, there's quite a lot of fun to be had with just what we've seen here today.

If you have any questions, please tweet me on Twitter @GLanata.

Deploying .NET Core Apps to the rPi 2
https://www.codeguru.com/iot/deploying-net-core-apps-to-the-rpi-2/ (Mon, 08 May 2017)

If you’ve been following along with the development of Windows 10 IoT Core, deploying applications to your Raspberry Pi 2 running the OS was focused on UWP. The quickest way to deploy was, of course, through Visual Studio, which offered features to see that goal accomplished successfully.

However, if you tried to drop down from a UWP app to a plain console application on your rPi 2, you would be met with a number of complications. Over time, people have indeed managed to do this, but corners needed to be cut and challenges faced. And really, it was quite a bit of work to run something that was potentially very small.

Bringing the context of this article up to the present day, we find ourselves with a much simpler and more up-to-date option. With the appearance of .NET Core 2, which is still in preview, the whole process is now much easier.

If you navigate to https://github.com/dotnet/cli, you will find the link to download the .NET Core 2.0 SDK, which, at the time of this writing, is version 2.0.0-preview2-005889, as shown in Figure 1.

Figure 1: The links to download .NET Core SDK

Go ahead and download the SDK if you don’t already have it, and then we’ll get started building a console application that we can run on the rPi 2.

Installing Windows 10 IoT Core

Luckily, the folks at Microsoft have done a lot of work in making the installation of Windows 10 IoT Core quite simple. The first bit of software you’ll need on your development system is the Windows IoT Core Dashboard, which is a Windows app, and you can download that from here.

Also, more information on Windows 10 IoT Core can be found through this link.

You'll also need an SD card to install the operating system onto, which most likely would have arrived with your rPi.

If you’re ready to go, you can run the IoT Core Dashboard, which looks something like what we see in Figure 2.

Figure 2: The Windows IoT Core Dashboard running

Now, Microsoft presents us with a very handy getting started guide, which I suggest following; but, for our needs, we only need to go as far as Step 3 because we won't be deploying a UWP app to the Pi. Do note that when you run IoT Core for the first time, it does take some time to start up, and I might suggest connecting a monitor so you can observe the progress. Also, please leave your IoT Core Dashboard open through the remainder of this article; there are several features we'll be using.

Building the .NET Core Application

Let’s come away from the Pi for a moment, and quickly put together the console application we’ll run from the Pi. I’m using Visual Studio 2017, which is required for the version of .NET Core we’ll be using.

We’ll also be creating a self-contained console application, which removes the need to install the .NET Core runtime on the Pi itself. Everything our application needs to run will be packed with the application when we publish it.

Now, go ahead and create a .NET Core application. You can use Visual Studio's project templates if you want but, if you do, we'll need to make some changes to the csproj file because that project template may target an earlier release of the framework.

Once your project has been created, right-click the project name and click edit *.csproj (see Figure 3).

Figure 3: Editing the csproj file

If your project targets an older .NET Core framework, you may have a *.csproj file that looks like this…

<Project Sdk="Microsoft.NET.Sdk">

   <PropertyGroup>
      <OutputType>Exe</OutputType>
      <TargetFramework>netcoreapp1.1</TargetFramework>
   </PropertyGroup>

</Project>

Let’s change that XML to look more like this…

<Project Sdk="Microsoft.NET.Sdk">

   <PropertyGroup>
      <OutputType>Exe</OutputType>
      <TargetFramework>netcoreapp2.0</TargetFramework>
      <RuntimeFrameworkVersion>2.0.0-beta-001745-00</RuntimeFrameworkVersion>
      <RuntimeIdentifiers>win8-arm</RuntimeIdentifiers>
   </PropertyGroup>

</Project>

There are two new elements added here, RuntimeFrameworkVersion and RuntimeIdentifiers, so let's examine them. The RuntimeFrameworkVersion element specifies the exact version of the framework we'll be running against. This can greatly reduce the amount of code packed with our application, and ultimately gives us a better start-up time.

The RuntimeIdentifiers element is one of the more important ones for our use. Because we will be running this application on the Pi, we need a way of telling the tooling to build for the ARM chipset; the Pi is not x86-based, as we are more used to on our dev systems. This element does that for us and, for our needs, win8-arm is the identifier to use here.

Because this is a basic console application, let’s have it output a simple bit of text that will give us a visual indicator it worked when we run it. I’ve edited the code in our program.cs file to reflect what we have here…

class Program
{
   static void Main(string[] args)
   {
      Console.WriteLine("This message came from the other side, the side of the Pi!");
   }
}
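If you'd like a quick sanity check that the binary really is running on the Pi's ARM build of Windows, you could extend Main with something like the following sketch; RuntimeInformation lives in the System.Runtime.InteropServices namespace…

// Sketch: report the OS and CPU architecture the app is actually running on.
// using System.Runtime.InteropServices;
Console.WriteLine($"OS:           {RuntimeInformation.OSDescription}");
Console.WriteLine($"Architecture: {RuntimeInformation.OSArchitecture}");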

Now, let’s build our application.

Open a command window, or PowerShell window, at the root of your project and type…

dotnet restore
dotnet build
dotnet publish -c Release -r win8-arm

If everything was successful, you should find in the bin/Release folder of your project another folder named netcoreapp2.0, which in turn has a folder named win8-arm, shown in Figure 4.

Figure 4: The result of the netcore publish

Now, the publish folder—underlined in Figure 4—is the folder we need to bring our attention to. It is everything in that folder which we’ll be copying to our Pi. Everything else in the win8-arm folder can be ignored.

So, let’s copy the published project to our Pi. Using the Windows IoT Core Dashboard we left open earlier, go to My Devices, and right-click your device shown in the list, just as we are shown in Figure 5, and select Open network share.

Figure 5: Opening a network share from the IoT Dashboard

You will be prompted to enter your credentials; once done, you'll see an Explorer window opened at the root of your rPi. From here, let's create a folder named apps, and then inside that folder another folder named something like testapp. This testapp folder is where we'll copy our application (see Figure 6).

Figure 6: The created apps folder at the root of our rPi

All that remains now is to copy our published application to the apps/testapp folder. Just do a straight copy and paste, making sure to copy only the files found in the publish folder. Then, once you're done, go back to the IoT Dashboard's My Devices view, right-click your Pi, and select Launch PowerShell.

Again, you'll be asked to enter your admin credentials; but, once done, we can use PowerShell to navigate to the folder we named testapp and type the following command…

./IotTestApp.exe

If you named your project something other than what was used in this article, please go ahead and amend the naming as needed. If all worked well, you should see a result that looks something like what’s shown in Figure 7.

Figure 7: The PowerShell window used to run our console application

As we can see, running from Windows 10 IoT Core on the Raspberry Pi 2, we have the output 'This message came from the other side, the side of the Pi!', just as specified in the program.cs of our application.

Conclusion

In the earlier days of Windows 10 IoT Core, the only applications we could run were UWP. Yes, you could run other stuff, but it often required quite a bit of work to do so.

Now, you can run non-UWP applications quite easily, and I would guess it will only get easier as .NET Core 2.0 develops further. If you have any questions about this article, you can find me on Twitter @GLanata, and have fun with your Pi!

Looking at Generalized Async Return Types
https://www.codeguru.com/dotnet/looking-at-generalized-async-return-types/ (Fri, 28 Apr 2017)

One of the top features to arrive in C# 7, for me, is the addition of ValueTask to our asynchronous tool box. But, before we look at some code showing ValueTask in action, let's first address the problem it solves.

To demonstrate, I've put together an experiment which, in a small way, shows a visible difference between the problem and the cure. I'll be using a standard .NET console application, which will require the following NuGet package…

System.Threading.Tasks.Extensions

I’m also using Visual Studio 2017.

The Problem

Take a look at the following code. Then, we’ll identify where the problem lies…

static void Main(string[] args)
{
   var data = GetDataFromTask();

   Console.WriteLine(data.Count());
   Console.WriteLine("complete");
   Console.ReadLine();
}

static IEnumerable<int> GetDataFromTask()
{
   Console.WriteLine("Data From Task");
   for (int i = 0; i < 1000000000; i ++)
   {
      yield return Task.FromResult(i * 10).Result;
   }
}

Now, whatever our GetDataFromTask method may do, we want to return a value type—in this case, an int—from the Task.FromResult within; this presents the problem. The value type is being wrapped in a Task which, of course, is a reference type, and, therefore will be treated accordingly by memory management.

In loops like the one we just saw, especially when the task completes synchronously, an impact on performance can be observed. If the amount of data being stored in memory is considerable, you also may see further performance issues when the garbage collector moves to complete its process.

Let’s run the preceding code in debug mode, and we observe results that look something like what we see in Figure 1.

Figure 1: Our code running, with a Task returned

In Figure 1, we can see that garbage collection did indeed occur once at around 10 seconds into the run, designated by the yellow marker, and there was just below 14MB of process memory in use. Furthermore, the loop took just short of 20 seconds to complete.

Using ValueTask

Do note that this example uses code that doesn’t reflect what you would find in the real world. But, let’s now go ahead and add the code from the following…

static IEnumerable<int> GetDataFromValueTask()
{
   Console.WriteLine("Data From ValueTask");
   for (int i = 0; i < 1000000000; i++)
   {
      yield return new ValueTask<int>(i * 10).Result;
   }
}

Instead of using Task.FromResult, we'll now use ValueTask<int> and compute the result as we did previously. Now, let's run this code and observe any differences we may see.

Figure 2: The code running making use of ValueTask

The first observation we can make here is that the process memory in use is sitting below 11 MB, and there was no garbage collection during this run. Secondly, even though it's only a small amount, the time taken to complete the run was shorter.

As said, we’re using code in this example that doesn’t reflect production code; but, if you were to put this in the context of long running and tightly looped async code, the possible benefits become clear. I’ve also used Task.FromResult and ValueTask<int> inline within the called methods. If you’re familiar with Task, you also can write code like what we see next.

async ValueTask<int> DoSomeWorkAsync()
{
   // async code
}
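To make the benefit concrete, here is a minimal sketch (the cache field and method names are my own) of the pattern ValueTask is designed for: return synchronously from a cache on the hot path, with no Task allocation, and only fall back to a real Task when an await is genuinely needed…

private readonly Dictionary<string, int> _cache =
   new Dictionary<string, int>();

public ValueTask<int> GetValueAsync(string key)
{
   // Hot path: the value is already cached, so no Task is allocated.
   if (_cache.TryGetValue(key, out int value))
      return new ValueTask<int>(value);

   // Cold path: fall back to a genuinely asynchronous operation.
   return new ValueTask<int>(LoadAndCacheAsync(key));
}

private async Task<int> LoadAndCacheAsync(string key)
{
   await Task.Delay(10);    // Stand-in for real async I/O.
   int value = key.Length;  // Stand-in for the loaded value.
   _cache[key] = value;
   return value;
}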

ValueTask also gives us a constructor that takes a Task as its parameter. This gives us the ability to construct a ValueTask from any other async method we may have; for example:

void DoSomethingAboutThis()
{
   var valueTask = new
      ValueTask<bool>(DidItWorkAsync());
}

async Task<bool> DidItWorkAsync()
{
   await Task.Delay(10);
   return true;
}
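And, if you were wondering, a ValueTask<T> built this way can be awaited just like a Task<T>; a small sketch (the method name is my own) might look like this…

async Task ConsumeValueTaskAsync()
{
   var valueTask = new ValueTask<bool>(DidItWorkAsync());

   // Awaiting a ValueTask<T> looks exactly like awaiting a Task<T>.
   bool worked = await valueTask;
   Console.WriteLine(worked);
}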

And finally, for the sake of completeness, let’s run the code from above using a standard synchronous method. This is what we would use for such code under normal circumstances…

Figure 3: The results from running the loop using standard synchronous code

Conclusion

It took me a moment to put together a small example for this article that was easy to follow along with and produced visibly different results. I would strongly recommend experimenting with ValueTask, and verifying that it's useful to you before using it in production code. That said, if it is useful, it can be very useful indeed!

If you have any questions on this article, you can find me on Twitter @GLanata.

Understanding C# Tuples
https://www.codeguru.com/dotnet/understanding-c-tuples/ (Wed, 05 Apr 2017)

If I were to draw up a list of the top debated and questionable features of C# prior to C# 7, tuples would be on that list. Being a part of the developer community, I can attest that tuples have indeed come up in conversation many times over the years.

Do you use them? Should you use them? If you do use them, are they genuinely helpful and would you stick by them? Or, would you prefer any given code base you're working on to be completely tuple free, and recoil at the sight of them!?

There are several questions there, and they are questions I can only answer for myself.

However, with the arrival of C# 7, tuples have had quite a bit of work done on them.

Before we go into some code, note that, depending on your version of Visual Studio, you may need to grab the following package from NuGet…

System.ValueTuple

This is required for Visual Studio 15 Preview 5 and earlier releases.

Let’s Code!

Before we look at this new feature of C# 7, let’s quickly remind ourselves what the tuples of yesterday look like…

static void Main(string[] args)
{
   Tuple<int, string> anOldTuple =
      new Tuple<int, string>(10, "count");
}

In this short code snippet, we can see how to bring to life a standard tuple, with an int and a string passed in through the constructor on being instantiated.

Once this tuple has been instantiated, its fields are read only. And, the naming of those fields would be something like Item1, Item2 and so on, which we can see in Figure 1.

Figure 1: A standard tuple

Given that naming, the fields don't really give us any clues as to what meaning the data stored in them might have. We therefore come to one of the primary concerns with using tuples. But, on the other hand, is this potentially better than the overhead of writing a class or struct for very simple data transfer operations?

Now, let’s compare the tuple we saw earlier, with the new tuple…

static void Main(string[] args)
{
   (int value, string name) newTuple = (10, "count");
}

We can see from the code above that firstly, declaring our tuple appears much cleaner. But, what does accessing fields in this tuple look like…?

Figure 2: Access fields on tuple 2

Our fields now have much more meaning as relevant naming is present. However, do note that the original naming convention can re-appear if the keyword var is used; this is demonstrated below…

static void Main(string[] args)
{
   var newTuple = (10, "count");
}

Figure 3: New tuples with old field naming when var is used

So, given what we’ve looked at so far, would you say you’ll be using tuples more often? For myself, I would say yes. And this is where, by design, they are ideally suited to appear—the return value on a private or internal method…

class Program
{
   static void Main(string[] args)
   {
      var result = GetData(new int[6] { 1, 3, 5, 6, 11, 20 });
      Console.WriteLine($"count : {result.count}");
      Console.WriteLine($"total : {result.total}");
      Console.WriteLine($"first value : {result.first}");
   }

   private static (int count, int total, int first)
      GetData(IEnumerable<int> values)
   {
      int count = values.Count();
      int total = values.Sum();
      int firstValue = values.First();

      return (count, total, firstValue);
   }
}

From the preceding code, we can avoid the work of creating a type just for this method, which is internal to the class and only called once. At the same time, we can keep meaningful naming of the fields, thus keeping our code readable.

And, here is the result of the previous code running…

Figure 4: The output from our private method, which returns a tuple

Speaking for myself, this is especially useful when returning multiple values via Task<T>, like so…

private static async Task<(int min, int max)>
   GetMinMax(IEnumerable<int> values)
{
   await Task.Delay(1000);
   return (values.Min(), values.Max());
}

Because I find myself often working with asynchronous methods for UI-related work, where such methods cannot have an out parameter, this is a boon on many levels.
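As a side note, the tuple returned from a method like GetMinMax can also be deconstructed straight into local variables, another C# 7 feature that pairs nicely with this; a small sketch (the method name is my own) might look like this…

static async Task PrintMinMaxAsync(IEnumerable<int> values)
{
   // Deconstruct the returned tuple directly into two locals.
   var (min, max) = await GetMinMax(values);

   Console.WriteLine($"min : {min}");
   Console.WriteLine($"max : {max}");
}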

Conclusion

There are many features in C# 7 which can help you in your day-to-day coding duties, and this is definitely one I’ll be making increased use of, and I hope it can help you too. As always, if you have any questions, you can find me on Twitter @GLanata.

Using C# 7 Local Functions
https://www.codeguru.com/dotnet/using-c-7-local-functions/ (Fri, 31 Mar 2017)

Over the years, I’ve created many classes where the number of methods used only by that class grew to quite a number, even though those methods were directly related to the purpose of the class. Even with comments, good naming, and other such devices, the class was increasingly difficult to understand at first glance.

It’s also worth noting that, in several instances, those methods were only called by a single method elsewhere in the class; and, looking back at some of that code, this is a very true state of affairs.

So, does C# 7 bring anything to help alleviate this problem? The answer to that question is yes! C# 7 brings us local functions.

Consider the following scenario. Using MVVM, I have my view model which will make up a part of a UWP app. The View Model is ultimately the piece of the puzzle that glues everything together; a method could be responsible for gathering pre-processed data before displaying it on the view.

I might have some code that looks something like this…

class Program
{
   static IEnumerable<string> YAxis { get; set; }
   static IEnumerable<string> XAxis { get; set; }
   static IEnumerable<double> Data { get; set; }

   static void Main(string[] args)
   {
      GetData();
   }

   static void GetData()
   {
      YAxis = GetYAxisLabels();
      XAxis = GetXAxisLabels();
      Data = GetDataPoints();
   }

   static IEnumerable<string> GetYAxisLabels()
   {
      for(int i = 0; i < 10; i++)
      yield return $"YLabel {i}";
   }

   static IEnumerable<string> GetXAxisLabels()
   {
      for (int i = 0; i < 20; i++)
      yield return $"XLabel {i}";
   }

   static IEnumerable<double> GetDataPoints()
   {
      Random rand = new Random();
      for (int i = 0; i < 20; i++)
      yield return rand.NextDouble();
   }
}
Note: I’m using a console application to simulate how it might look in a more complicated UWP app, but we can see from this code the class could become quite long, where many of the methods are called from only one method. However, we are using those methods to separate out code and responsibility.

How might this look using local functions? Let’s look at this next piece of code…

static void GetData()
{
   YAxis = getYAxisLabels();
   XAxis = getXAxisLabels();
   Data = getDataPoints();

   IEnumerable<string> getYAxisLabels()
   {
      for (int i = 0; i < 10; i++)
      yield return $"YLabel {i}";
   }

   IEnumerable<string> getXAxisLabels()
   {
      for (int i = 0; i < 20; i++)
      yield return $"XLabel {i}";
   }

   IEnumerable<double> getDataPoints()
   {
      Random rand = new Random();
      for (int i = 0; i < 20; i++)
      yield return rand.NextDouble();
   }
}

From the preceding code sample, the lines of code we used haven't really changed; but we can very quickly see that the local functions are directly related to the method in which they reside. This is one of the primary design concepts of local functions, and it allows us to understand a class very quickly, especially if it's one we are creating that others may need to understand. Adding to this, we also can say that a local function cannot be accidentally called from elsewhere in the class.

If you want to see some output from the previous code, I’ve added these few lines here…

YAxis = getYAxisLabels();
XAxis = getXAxisLabels();
Data = getDataPoints();

foreach(var d in Data)
   Console.WriteLine(d);

This should give us output like what is seen in Figure 1.

Figure 1: Output from our local functions example

As I’m sure you’ve noticed, I’ve declared the local functions after they are called. With local functions, this is acceptable, and it’s one of the features that separates local functions from lambdas.

Take this next code sample, for example…

static void RecursiveTest()
{
   Action printMessageDelegate = () => {
      Console.WriteLine("Delegate called");
   };

   printMessageDelegate();

   printMessageLocalFunction();

   void printMessageLocalFunction()
   {
      Console.WriteLine("Local function called");
   }
}

The printMessageDelegate must be defined before it is called; but, below that, we can see our local function working happily even though it was defined after its point of call. There are other differences between lambdas and local functions that may interest you, but the primary one is performance. Where high-performance code is required, if something can be done with a local function, you'll be rewarded for using it.
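Another difference worth a quick sketch: a local function can call itself directly, which a lambda cannot do without first declaring the delegate variable and then assigning it. A small, hypothetical example…

static long Factorial(int n)
{
   // The local function recurses into itself without any extra ceremony.
   long Calculate(int value)
   {
      return value <= 1 ? 1 : value * Calculate(value - 1);
   }

   return Calculate(n);
}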

Conclusion

After speaking with a number of other developers, I've determined that local functions are a very welcome addition to the C# language, and one that I hope will assist you in your day-to-day coding duties.

If you have any questions about this article, please find me on Twitter @GLanata.

Getting Started with Visual Studio 2017 and ASP.NET Core
https://www.codeguru.com/dotnet/getting-started-with-visual-studio-2017-and-asp-net-core/ (Wed, 22 Mar 2017)

I've been holding off installing Visual Studio 2017 for as long as I could, mostly due to ongoing projects where I can't risk something not working. However, curiosity has taken hold and the installation has begun. Therefore, I dedicate this short article to those of us who are holding off, or are going to continue holding off, installation until more information is made available on stability and general operation.

The first noticeable change to this version of the very popular IDE is the dialogue shown in Figure 1 to select what features of VS you wish to install.

Figure 1: The Visual Studio installation main screen

Before we go on, if you're used to the old installer, the 'Individual Components' tab at the top will present a view closer to what we used to see. An example is shown in Figure 2.

Figure 2: Selecting Individual Components

From the first tab, 'Workloads,' I've made sure I've ticked the '.NET Core cross-platform development' workload, which we'll use to put together a bit of code to conclude this article. As for the installation, the whole process came across as very smooth, with a required restart at the end. Such restarts are common to those of us in the IT industry.

Anyway, once installed, I’ve gone ahead and created an empty ASP.NET Core project. That project looks something like what we see in Figure 3…

Figure 3: The empty ASP.NET Core project

This is the first time I’ve put together a .NET Core project that does not use project.json; that is now a relic of history. Instead, we are back to csproj and one thing I’m quite happy about is being able to edit that csproj file without having to unload the project first. After right-clicking the project, then hitting Edit *.csproj, I’m presented with what we see in Figure 4.

Figure 4: The .csproj file

It is indeed as they said it would be, and that's a lot tidier than it used to be. For those of you who want to bring your MSBuild skills to .NET Core, this shift away from project.json represents the opportunity to do just that! However, in the interest of diplomacy, I'm going to keep my personal thoughts on the change to myself and leave it to you to decide whether this is a good move or not. Also, note that I am still using .NET Core 1.1, because some updates are still pending on my side.

Some Code

Let’s conclude this article with a nice addition that ships with ASP.NET Core, and that’s the Logger Factory. If you’re running your application from the console, which is a preferred method if you’re not on Windows, you can use the Logger Factory to output useful information at runtime.

Just to be sure we’re on the same page, I’ve kept the default code from the empty ASP.NET template, which can be found in the startup class. And that code looks something like this…

public void ConfigureServices(IServiceCollection services)
{
}

public void Configure(IApplicationBuilder app,
   IHostingEnvironment env, ILoggerFactory loggerFactory)
{
   loggerFactory.AddConsole();

   if (env.IsDevelopment())
   {
      app.UseDeveloperExceptionPage();
   }

   app.Run(async (context) =>
   {
      await context.Response.WriteAsync("Hello World!");
   });
}

In the preceding code, we can see that the console logger is wired up by default in the Configure method. So, let's run this application: open a command window at the location of the project, and type dotnet run.

Figure 5: The application run from the console window

If you then navigate to the URL http://localhost:5000, you’ll see some basic output from your application, which comes from the middleware component defined in Startup.cs.

Figure 6: Output from the application shown in the browser

Now, if your console window is visible, you’ll have seen some activity in it, courtesy of the logger added to the application. After navigating to the localhost URL, my window gives me this output…

Figure 7: Logger output shown in the console window

There is a variety of information the logger will show, and at times it can be too much to take in. But, fear not: you can configure the logger to output only messages at or above a chosen severity, which may be more relevant to you.

Let’s change the code in the Configure method in our startup.cs to look more like this…

// Only pass entries at LogLevel.Error or above to the console
loggerFactory.AddConsole(LogLevel.Error);

if (env.IsDevelopment())
{
   app.UseDeveloperExceptionPage();
}

app.Run(async (context) =>
{
   // Deliberately fail so that an error is logged; the WriteAsync
   // call below is now unreachable
   throw new NullReferenceException();
   await context.Response.WriteAsync("Hello World!");
});

What we’ve done above is set the console logger’s minimum level to LogLevel.Error which, to quote the documentation, covers “logs that highlight when the current flow of execution is stopped due to a failure.” I’ve also thrown a NullReferenceException inside our middleware component. Let’s stop the application with Ctrl+C, rebuild, re-run, and then observe the results…

Figure 8: The application running, with log level set to error, and throwing an exception

Notice that we’ve lost any logged information that was prefixed as ‘info,’ and can see only the error we’ve forced upon the application.
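
If you’d like to see the filtering at work with entries of your own, here’s a minimal sketch to drop into the Configure method (the "Demo" category name and the message text are mine, not part of the template): it creates a logger from the same factory and writes at two different levels inside the request delegate.

var logger = loggerFactory.CreateLogger("Demo");

app.Run(async (context) =>
{
   // With the minimum level set to LogLevel.Error, this entry is filtered out...
   logger.LogInformation("Handling request {Path}", context.Request.Path);

   // ...but this one still reaches the console
   logger.LogError("Something went wrong while handling {Path}",
      context.Request.Path);

   await context.Response.WriteAsync("Hello World!");
});

Switch the console logger back to its default level and both entries appear alongside the framework’s own messages.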

Conclusion

For further experimentation, I would suggest playing around with the log levels and trying to create a variety of errors for your application to deal with. Again, this feature ships with ASP.NET Core out of the box and is a valuable addition; it can also greatly assist with learning more about the ASP.NET Core framework itself.
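
While you’re experimenting, one more thing worth trying: the 1.x console logger also offers an AddConsole overload that takes a filter delegate, so you can filter by category as well as by level. A quick sketch, assuming that overload (the category prefix check is just an example):

loggerFactory.AddConsole((category, level) =>
   // Keep framework categories down to warnings and above, but show
   // everything else from Information upwards
   category.StartsWith("Microsoft")
      ? level >= LogLevel.Warning
      : level >= LogLevel.Information);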

If you have any questions you’d like to ask about this article, please find me on Twitter @GLanata.

The post Getting Started with Visual Studio 2017 and ASP.NET Core appeared first on CodeGuru.

Creating Unity3D Animations Using C# https://www.codeguru.com/dotnet/creating-unity3d-animations-using-c/ Wed, 01 Mar 2017 08:15:00 +0000

If you’ve followed along with my previous articles on Unity3D, you’ll know that we’ve created some objects in a 3D scene and interacted with that scene with a little C#. If you haven’t read the other articles, worry not! There’s nothing in the previous articles you’ll need to know to follow along with this one.

In this article, we’ll look at creating a simple animation and starting that animation from C#.

Setting Up Our Scene

I’m going to be using Unity3D version 5.4.2f2. To edit scripts, anything that allows you to edit a C# file will be fine.

The first thing we need is a new project in Unity, and, in Figure 1, we can see the settings I’ve selected for this example. You’ll also notice there’s a ‘getting started’ tab on the project setup view; it will lead you to some excellent tutorials and examples, and to the massive community around Unity you can learn from.

Figure 1: Setting up our project

Next, we need to add a 3D object to our scene, which we’ll animate. One way to do this is from the Hierarchy view, which is, by default, to the left of the main work area.

Figure 2: Creating a 3D object in our scene

As we can see in Figure 2, I’ve added a cube to the scene. Next, we need an animation controller, an animation, and a C# script. We’ll come to the script last; once all the other elements are in place, everything else can be done from that script.

In the assets view at the bottom of the scene, let’s create everything else we need by right-clicking the panel then…

  • Create -> Animation controller
  • Create -> Animation
  • Create -> C# Script

Figure 3: Adding the Animation and Animation Controller

Once we have these in our scene, everything should look like what we have in Figure 4…

Figure 4: The elements created

Note: The second item from the left, which is represented by the Unity Icon, is just the saved scene.

Before going any further, let’s quickly go over what we’re trying to achieve: we’ll set up our scene with the 3D object (our cube) in the centre, and when we play the scene, we want that cube to rotate on the left mouse button down event.

First, we’ll create a state that we’ll trigger from code to make the object rotate. To create this state, double-click the animation controller, which is represented by this icon…

Figure 5: The animation control icon

Once you’ve double-clicked the controller, you’ll see a view like the one in Figure 6, minus any states, because I’ve gone ahead and created some already.

Figure 6: The animation controller with two states pre-created

The two states I’ve already created are named ‘Normal’ and ‘MouseDownState’; note that I created the Normal state first. You can create a state by right-clicking the view, then clicking Create State, and then Empty.

Now, if you look between the two states, you can see two joining white lines with arrows on them. These are called Transitions, and to create one, right-click a state and click Make Transition. The state you right-click is the state the arrow points away from, and the state you click afterwards is the state the arrow points to. Therefore, we need to make a transition from each state to the other. Once you have these, there’s a small edit we need to perform on the transition leading from the ‘Normal’ state to the ‘MouseDownState’: click the transition and un-check the Has Exit Time option. Figure 7 shows an example of what you’ll see when you click a transition.

Figure 7: The transition selected and Has Exit Time unchecked

Unchecking the Has Exit Time option will prevent the ‘Normal’ state from automatically transitioning to the ‘MouseDownState’. Now that we have the basic layout of our controller, let’s create the animation. But, before we move away from the animation controller view, we need to add the animation to the ‘MouseDownState’.

With the state selected, drag the animation we created in our assets to the Motion field shown on the inspector. The animation has this icon…

Figure 8: The Animation icon

And, the Motion property can be seen here…

Figure 9: The Motion field at the top of the Inspector view

Back in our scene view, select the cube in the scene, then drag and drop the animation controller onto it. Do the same for the script, and then, with the cube still selected, click the Add Component button at the bottom of the Inspector panel. When the component list appears, click Physics, then Box Collider. This box collider will let us detect whether a mouse click actually hit the 3D object.

If all of the above has gone to plan, you should see these components in the inspector list when you have the cube selected, as shown in Figure 10.

Figure 10: The components added to our 3D object, shown in the inspector panel

Now, we’re ready to create a simple rotation animation. From the menu at the top of the workspace, click Window, and then Animation; the keyboard shortcut for this is Ctrl + 6. If you don’t have your cube selected, go ahead and do so now. Once you do, you should see the Animation window become active, with our animation selected. Take a look at Figure 11; then, we’ll go over adding a property to animate the rotation.

Figure 11: The animation window, with the property Rotation added to our animation list

If you click the Add Property button, you’ll see the popup shown above. From there, add the Rotation property; then, let’s get to work creating a key frame that will complete the animation. First, expand the property, move the current time marker (the red line) to a position of your choice, and then edit one of the x, y, or z rotation values like so…

Figure 12: Creating a key frame on our animation time line

Once you’ve had a play around with adding key frames to the timeline, we can move on to the C# script and make our cube rotate when we click it.

The C# Script

My script looks something like this…

using UnityEngine;

// The class name must match the name of the C# script asset created earlier
// (CubeClick is just an example name). Attach this script to the cube.
public class CubeClick : MonoBehaviour {

   // Use this for initialization
   void Start () {
   }

   // Update is called once per frame
   void Update () {

      // Detect if the left mouse button is down
      if (Input.GetMouseButtonDown(0))
      {
         // Cast a ray from the camera through the mouse position
         var ray = Camera.main.ScreenPointToRay
            (Input.mousePosition);
         RaycastHit raycastHit;

         if (Physics.Raycast(ray, out raycastHit, 100))
         {
            // get the collider, which was hit by the ray
            var colliderHit = raycastHit.collider;
            // get the game object the collider is attached to
            var gameObjectHit = colliderHit.gameObject;

            // get the gameObject's Animator component
            var animator =
               gameObjectHit.GetComponent<Animator>();

            // play the animation state we set up in the controller
            animator.Play("MouseDownState");
         }
      }
   }
}

And, that’s pretty much it. From the preceding code, we can see it’s possible to get at many of the objects, properties, and anything else we need to build out our game/application.
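
As a quick illustration of that flexibility, here’s an alternative sketch (again, the class name is only an example): because the cube already has a Box Collider, Unity will also call OnMouseDown on any script attached to it when it’s clicked, which removes the need for the manual raycast.

using UnityEngine;

// Alternative approach: let Unity detect the click via the cube's collider
public class CubeClickSimple : MonoBehaviour {

   // Called by Unity when the left mouse button is pressed while the
   // pointer is over this object's collider
   void OnMouseDown () {
      // Play the same state on the Animator attached to the cube
      GetComponent<Animator>().Play("MouseDownState");
   }
}

Either way, everything ends with the same call into the Animator, so the rest of the setup stays exactly as described above.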

If you then run the scene, and click the 3D object, we should see some rotation.

Figure 13: The scene running, with visible rotation on the cube

Conclusion

If you’re just starting out with Unity3D, there’s quite a lot to take in. But, once you’re over the initial information dump, there’s much fun to be had even if you’re just using Unity in your spare time.

If you have any questions on this article, I can be found on Twitter @GLanata.

The post Creating Unity3D Animations Using C# appeared first on CodeGuru.
