In this article, we will see how to create a simple screen sharing app using SignalR streaming. SignalR supports both server-to-client and client-to-server streaming. In my previous article, I implemented server-to-client streaming with ChannelReader and ChannelWriter. That approach can feel as complex as writing an asynchronous method without the async and await keywords. IAsyncEnumerable, a new addition in .NET Core 3.0 and C# 8 for asynchronous streaming, makes this much simpler: it now takes only a few lines of clean code. In this example, we will use client-to-server streaming to stream desktop content to all connected remote viewers using a SignalR stream backed by the IAsyncEnumerable API.
Disclaimer
The sample code for this article is an experimental project for testing SignalR streaming with IAsyncEnumerable. In real-world scenarios, you may want to consider a peer-to-peer connection using WebRTC, or another socket library, for building an effective screen sharing tool.
The ScreencastR agent is an Electron-based desktop application. Electron is a framework for creating native applications with web technologies like JavaScript, HTML, and CSS. It allows you to create desktop applications with pure JavaScript by providing a runtime with rich native (operating system) APIs. In our example, I have used the desktopCapturer API to capture the desktop content. If you are new to Electron, you can follow the official docs to create your first Electron application.
A simple Electron application has the following files, similar to a Node.js application.
your-app/
├── package.json
├── main.js
└── index.html
The starting point is package.json, which specifies the entry-point JavaScript file (main.js); main.js creates a basic Electron shell with the default menu and loads the main HTML page (index.html). In this package.json, I have added a dependency on the latest SignalR client.
package.json
{ |
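A minimal package.json for an Electron agent that depends on the SignalR client might look like the following sketch; the package name and version numbers are illustrative assumptions, not the project's exact file.

```json
{
  "name": "screencastr-agent",
  "version": "1.0.0",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "dependencies": {
    "@microsoft/signalr": "^3.0.0"
  },
  "devDependencies": {
    "electron": "^6.0.0"
  }
}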
When we run npm install, all the dependencies are brought under the node_modules folder, including the SignalR client. Copy the signalr.js file from the node_modules\@microsoft\signalr\dist\browser folder into the root folder.
index.html
|
In the index.html page, we have a simple layout with a field for the agent name and the start and stop casting buttons.
Renderer.js
const { desktopCapturer } = require('electron') |
In renderer.js, the initializeSignalR method initializes the SignalR connection when the application loads and listens for the NewViewer and NoViewer hub methods. NewViewer is called whenever a new remote viewer joins to watch the stream; the agent does not stream any content until at least one viewer exists. When NoViewer is called, the agent stops the stream.
The CaptureScreen method uses the desktopCapturer API to get the list of available screen and window sources and filters it down to the “Entire screen“ source only. Once the source is identified, a screen thumbnail is generated from it based on the defined thumbnail size. CaptureScreen is promise-based and returns the image data as a string in its resolve callback. We call CaptureScreen in a timer (setInterval) based on the configured frames per second, and the output is streamed via the SignalR subject class.
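The source-filtering and timer math described above can be sketched as plain helpers; the source-name string follows Electron's desktopCapturer convention, but the function names here are my own and the Electron call itself is omitted so the helpers stay framework-free.

```javascript
// Hypothetical helpers around the desktopCapturer flow described above.
function pickEntireScreen(sources) {
  // desktopCapturer.getSources() returns both screens and windows;
  // keep only the full-desktop source.
  return sources.find((s) => s.name === 'Entire screen');
}

function frameIntervalMs(framesPerSecond) {
  // Delay to pass to setInterval for the configured capture rate.
  return Math.round(1000 / framesPerSecond);
}
```

With these, the capture loop is roughly `setInterval(() => captureScreen(source), frameIntervalMs(fps))`, pushing each resolved frame into the SignalR subject.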
The ScreenCastR Remote Viewer is a server-side Blazor app with the SignalR hub hosted in it. This app also acts as a SignalR client to receive the stream data from the hub. Whenever a new agent joins, the dashboard page shows the agent's name along with View and Stop Cast buttons. When the user clicks the View Cast button, the app starts receiving the stream from the hub and renders the output on the screen. In the video above, the left side is the agent streaming data to the SignalR hub and the right side is the viewer rendering the stream from the hub.
public class ScreenCastHub : Hub |
The ScreenCastHub class is the streaming hub with all the methods needed for the agent and remote viewers to communicate.
StreamCastData is the main streaming method; it takes an IAsyncEnumerable of items and streams each chunk of data it receives to all the connected remote viewers.
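A hedged sketch of how such a client-to-server streaming hub method can look; the method name follows the description above, but the exact signature and the viewer group/client method names are assumptions.

```csharp
// Sketch only: assumes viewers are grouped under the agent's name.
public async Task StreamCastData(IAsyncEnumerable<string> stream, string agentName)
{
    // await foreach consumes the client-to-server stream chunk by chunk.
    await foreach (var frame in stream)
    {
        // Relay each chunk to the connected viewers as soon as it arrives.
        await Clients.Group(agentName).SendAsync("OnStreamDataReceived", frame);
    }
}
```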
AddScreenCastAgent notifies all connected remote viewers whenever a new agent joins the hub.
RemoveScreenCastAgent notifies all connected remote viewers whenever an agent disconnects from the hub.
AddScreenCastViewer notifies the agent when a new viewer joins to watch the screen cast.
RemoveScreenCastViewer notifies the agent when all viewers have disconnected from the screen cast.
public class ScreenCastManager |
This class holds the number of viewers connected per agent. It is injected into the hub via dependency injection in singleton scope.
services.AddSingleton<ScreenCastManager>();
In Startup.cs, increase the default maximum message size from 32 KB to a larger value based on the quality of the stream output; otherwise, the hub will fail to transmit the data.
public void ConfigureServices(IServiceCollection services) |
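A sketch of the message-size configuration using SignalR's HubOptions; the 1 MB value is an arbitrary example to tune against your stream quality.

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR(hubOptions =>
    {
        // Default is 32 KB; large base64 frames need more headroom.
        hubOptions.MaximumReceiveMessageSize = 1024 * 1024; // 1 MB
    });
    services.AddSingleton<ScreenCastManager>();
}
```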
@using Microsoft.AspNetCore.SignalR.Client |
In this component, the OnInitializedAsync method initializes the SignalR client connection with the hub and subscribes to the streaming method. When stream data arrives from the hub, it updates the image source DOM element and re-renders the screen with the changes.
IAsyncEnumerable is a very nice feature added in .NET Core 3.0 and C# 8 for asynchronous streaming with cleaner, more readable code. With this new feature and SignalR streaming, we can build many cool projects, such as real-time app health monitoring dashboards, real-time multiplayer games, etc. I have uploaded the entire source code for this article to the GitHub repository.
Happy Coding!!!
In this article, we will see how to create a bot-vs.-player multiplayer tic-tac-toe game in Blazor. Blazor is an open-source .NET web front-end framework that allows us to create client-side applications using C# and HTML. This is a simple ASP.NET Core hosted server-side Blazor application with a game UI Razor component and a SignalR game hub that connects players with the bot. The game bot is built as a .NET Core background service with a core game engine that identifies the best move against a player using the recursive minimax algorithm. The entire source code is uploaded in my GitHub repository.
As a first step, launch the latest Visual Studio 2019 and create a new Blazor project by selecting ASP.NET Core Web Application and then choosing Blazor Server App.
I used the Blazor server-side app for this example, but you can use client-side Blazor as well. Right now, client-side Blazor doesn't have an official Blazor SignalR client because of its dependency on WebSocket support in the runtime. However, a community version of the Blazor SignalR client is available.
In Solution Explorer, add a new Razor component called TicTacToe.razor and put the tic-tac-toe board design and logic in the component. It also initializes the SignalR hub client.
@using Microsoft.AspNetCore.SignalR.Client |
In this component, we have three layouts. The main layout renders the tic-tac-toe board; the other two show the winner or a draw panel. The main layout uses a Bootstrap container to lay out the board, and each cell is wired to an onclick event handler that notifies the hub with the selected cell value.
@code { |
In the OnInitAsync method, we initialize the board with the default index values. By default, the player uses the X symbol and the bot uses the O symbol.
We also initialize the SignalR hub connection in OnInitAsync. On a cell click, the OnSelect method executes, the board item array is updated with the player's move, and the entire board array is sent as a parameter to the hub method OnUserMoveReceived. The component also listens for the NotifyUser hub method, which the bot invokes with its move.
public class GameHub : Hub |
This hub class holds the following SignalR methods.
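A sketch of the hub, using the method names mentioned in this article (OnBotConnected, OnUserMoveReceived, NotifyUser); the exact signatures and client-side method names are assumptions.

```csharp
// Sketch only — not the project's exact GameHub implementation.
public class GameHub : Hub
{
    // The bot registers itself into a "BOT" group on startup.
    public Task OnBotConnected() =>
        Groups.AddToGroupAsync(Context.ConnectionId, "BOT");

    // Player move: forward the board to the bot along with the caller's id
    // so the bot can reply to the right player.
    public Task OnUserMoveReceived(int[] board) =>
        Clients.Group("BOT").SendAsync("UserMoveReceived", Context.ConnectionId, board);

    // Bot move: notify the originating player with the updated board.
    public Task NotifyUser(string connectionId, int[] board) =>
        Clients.Client(connectionId).SendAsync("NotifyUser", board);
}
```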
public class Worker : BackgroundService |
The game bot is developed as a .NET Core background service. When it starts, it connects to the SignalR hub and invokes the OnBotConnected method to add itself to the BOT SignalR group. When it receives a message from the hub with the board array data, it calculates the next best move by calling GetBestSpot on the game engine and sends its move back to the caller.
When the background service stops, it disposes of the SignalR connection and removes itself from the BOT group.
public class GameEngine |
I used the minimax algorithm in the game engine to find the best available spot. Minimax is a recursive algorithm that plays out every possible move, both for itself and for the opponent, until it reaches a terminal state (win or draw), and then picks the best move from all those iterations. You can refer to this article for more details about the minimax algorithm.
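To make the idea concrete, here is a compact, self-contained minimax sketch for tic-tac-toe. It is illustrative only and not the project's actual GameEngine; the class shape is my own, apart from GetBestSpot, which mirrors the method name used above.

```csharp
// Board: 9 cells; 'O' = bot (maximizer), 'X' = player, '\0' = empty.
public static class Minimax
{
    static readonly int[][] Lines =
    {
        new[]{0,1,2}, new[]{3,4,5}, new[]{6,7,8}, // rows
        new[]{0,3,6}, new[]{1,4,7}, new[]{2,5,8}, // columns
        new[]{0,4,8}, new[]{2,4,6}                // diagonals
    };

    static char Winner(char[] b)
    {
        foreach (var l in Lines)
            if (b[l[0]] != '\0' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                return b[l[0]];
        return '\0';
    }

    // Recursively play out every move for both sides and score the outcome.
    static int Score(char[] b, bool botTurn)
    {
        var w = Winner(b);
        if (w == 'O') return +1;
        if (w == 'X') return -1;

        int best = botTurn ? int.MinValue : int.MaxValue;
        bool moved = false;
        for (int i = 0; i < 9; i++)
        {
            if (b[i] != '\0') continue;
            moved = true;
            b[i] = botTurn ? 'O' : 'X';
            int s = Score(b, !botTurn);
            b[i] = '\0';
            best = botTurn ? Math.Max(best, s) : Math.Min(best, s);
        }
        return moved ? best : 0; // no moves left: draw
    }

    // Try every empty cell as the bot and keep the highest-scoring one.
    public static int GetBestSpot(char[] b)
    {
        int bestSpot = -1, bestScore = int.MinValue;
        for (int i = 0; i < 9; i++)
        {
            if (b[i] != '\0') continue;
            b[i] = 'O';
            int s = Score(b, botTurn: false);
            b[i] = '\0';
            if (s > bestScore) { bestScore = s; bestSpot = i; }
        }
        return bestSpot;
    }
}
```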
Blazor is super useful for .NET developers who are not interested in learning JavaScript for front-end development. This article shows how easy it is to develop a real-time Blazor application with SignalR. I used the minimax algorithm to identify the best available spot, but it would be even more interesting to use a reinforcement learning algorithm so the AI learns from rewards instead of recursive search. That will be a good use case to try when ML.NET introduces a reinforcement learning library.
The entire source code is uploaded in my github repository. Happy Coding.
Microsoft announced the new .NET 5 (the future of .NET) at the Build 2019 conference. .NET 5 will be a single unified platform for building applications that run on all platforms (Windows, Linux) and devices (IoT, mobile).
If you are a .NET developer currently supporting enterprise applications built on the .NET Framework, you need to know how .NET 5 will affect your application in the long run. .NET 5 is based on .NET Standard, which means not every .NET Framework feature will be available in .NET 5. Also, some technology stacks, such as Web Forms, WCF, and WWF, are not being ported to .NET 5. We will look into the details of what is not covered in .NET 5 and what the alternatives are.
ASP.NET Web Forms will not be coming to .NET 5, and Microsoft currently recommends moving to Blazor, an experimental project that was recently promoted to an official one. The other alternatives are the Angular, React, and Vue SPA frameworks if you are good at JavaScript.
If you are currently using ASP.NET MVC as a full-stack web framework, you can continue with the same stack by using ASP.NET Core MVC or the new Razor Pages introduced in .NET Core 2.0, which may look similar to ASP.NET Web Forms for quickly building form-based applications without views and controllers. However, if you are developing modern enterprise web applications, it is better to consider a single-page application framework such as Blazor, Angular, or React instead of a traditional web app, for richer client-side functionality.
The announcement that WCF would miss the .NET 5 train surprised many, including me. There has been a lot of discussion on GitHub about bringing WCF to .NET Core, but Microsoft decided against it because their initial estimate was that porting WCF to .NET Core would take three years (source: DotNetRocks podcast).
Microsoft recommends gRPC as an alternative: a modern, open-source, high-performance RPC framework that can run in any environment. However, unlike WCF, gRPC cannot be hosted in IIS as of today, because the HTTP/2 implementation in Http.Sys does not support the HTTP response trailing headers that gRPC relies on.
Workflow Foundation is not being ported to .NET Core. Almost every enterprise application has some workflow or BPM tool integrated with it. If you used WWF in your application, Microsoft recommends looking at the unofficial fork of the WF runtime for porting to .NET Core.
Microsoft is bringing the Windows Desktop Packs (WinForms, WPF, and UWP) to support desktop applications, which work only on Windows. I wouldn't expect anyone to use WinForms for new development, but this will help port legacy WinForms applications to .NET 5. This doesn't mean the .NET Core architecture is changing: it remains a cross-platform framework, but when you add the desktop packs, the application targets Windows only. Porting an existing Windows desktop application to .NET Core still brings the additional benefits of Core runtime and API performance improvements and deployment flexibility.
The latest version of the language, C# 8.0, introduced many new features, including async streams, ranges, nullable reference types, and pattern matching. However, they will be available only on .NET Core 3.0 and above, which means they are not coming to the legacy .NET Framework or to .NET Core 2.2, 2.1, or 1.0. It clearly indicates that “.NET Framework is dead and .NET Core is the future”.
If you are planning to port a .NET Framework application to .NET Core, you will have to analyze the APIs used in your project to see what is compatible. The .NET Portability Analyzer is a tool that helps you analyze and determine how portable your application is across .NET platforms.
As a .NET developer, I am happy to see the future of .NET and the direction it is going. I no longer need to learn JavaScript for an SPA framework, because Blazor will do that. I no longer need to learn Python for machine learning, because ML.NET will do that. I no longer need to learn Android/Swift, because Xamarin will do that. If you know C#, you can now develop applications that run anywhere, from IoT to the cloud. However, this change is going to affect a lot of enterprise customers whose products or frameworks are based on WCF and Web Forms.
SignalR streaming is a recent addition to the SignalR library; it supports sending fragments of data to clients as soon as they become available, instead of waiting for all of the data. In this article, we will build a small baby-monitoring app that streams camera content from a Raspberry Pi using SignalR streaming. The tool also sends a notification to connected clients whenever it detects a baby crying, using the Cognitive Vision Service.
This tool consists of the following modules.
The Azure-based Cognitive Vision Service takes an image as input, detects whether any human face exists, analyzes the face attributes, and sends back a response with attribute values such as smile, sadness, and anger.
The SignalR client is a JavaScript-based Chrome extension that runs in the browser background. When the SignalR hub sends notification messages, it shows a popup notification to the user. The user also has the option to view the live stream from the client's popup window.
PiMonitRHub is the streaming hub; it holds the streaming methods StartStream and StopStream. When the SignalR client invokes StartStream, the hub calls the camera service to capture a photo and sends it to the client by writing it to a ChannelWriter. Whenever an object is written to the ChannelWriter, it is immediately sent to the client. At the end, the ChannelWriter is completed via the writer.TryComplete method to tell the client the stream is closed.
public class PiMonitRHub : Hub |
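The ChannelWriter pattern described above can be sketched roughly like this; apart from StartStream and TryComplete, which the article names, the field and method names are assumptions.

```csharp
// Sketch of the streaming pattern, not the exact PiMonitRHub implementation.
public ChannelReader<string> StartStream(CancellationToken token)
{
    var channel = Channel.CreateUnbounded<string>();
    // Fire-and-forget writer loop; the reader side is returned to the client.
    _ = WriteFramesAsync(channel.Writer, token);
    return channel.Reader;
}

private async Task WriteFramesAsync(ChannelWriter<string> writer, CancellationToken token)
{
    try
    {
        while (_isStreamRunning && !token.IsCancellationRequested)
        {
            // Capture a frame and push it to the client immediately.
            var frame = await _cameraService.TakePictureAsBase64Async();
            await writer.WriteAsync(frame, token);
        }
    }
    finally
    {
        writer.TryComplete(); // tells the client the stream is closed
    }
}
```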
PiMonitRWorker is a worker service inheriting from BackgroundService. It starts a new thread when the application starts and executes the logic inside the ExecuteAsync method at a regular interval until cancellation is requested via the CancellationToken.
internal class PiMonitRWorker : BackgroundService |
In this worker service, a photo is captured using the camera service and sent to the Cognitive Service API to detect a baby cry. If a cry is detected, the notification hub method broadcasts a notification message to all connected clients. If a client is already watching the stream, the background service suspends cry detection until the user stops watching, to avoid sending duplicate notifications.
The Microsoft Cognitive Services APIs are very powerful and provide AI capabilities in a few lines of code. Various Cognitive Service APIs are available; in this app, I will use the Cognitive Vision API to analyze face emotion and determine whether the baby is crying. This API analyzes a given photo to detect and recognize human faces and analyze emotion attributes such as smile and sadness. Best of all, this service has a free tier that allows 20 calls per minute, so we can get started without paying anything.
After you register the cognitive service in Azure Portal, you will get the API end point and the Keys from the portal.
You can store the keys and endpoint URL in User Secrets, AppSettings, or Azure Key Vault so that they can be read through the configuration API.
public class FaceClientCognitiveService |
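A hedged sketch of the detection call, assuming the Microsoft.Azure.CognitiveServices.Vision.Face client library; the configuration keys and the 0.75 sadness threshold are illustrative assumptions, not the article's actual values.

```csharp
// Sketch only: detect faces with emotion attributes and check sadness.
public async Task<bool> IsBabyCryingAsync(Stream photo)
{
    var client = new FaceClient(
        new ApiKeyServiceClientCredentials(_configuration["CognitiveService:Key"]))
    {
        Endpoint = _configuration["CognitiveService:EndPointURL"]
    };

    var faces = await client.Face.DetectWithStreamAsync(photo,
        returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.Emotion });

    // Treat a high sadness score as a cry (threshold is arbitrary here).
    return faces.Any(f => f.FaceAttributes.Emotion.Sadness > 0.75);
}
```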
I am running my Raspberry Pi with Raspbian OS, which is based on the Linux ARM architecture. The camera module has a built-in command-line tool called raspistill to take pictures. However, I wanted a C# wrapper library to capture pictures from the Pi, and I found a wonderful open-source project called MMALSharp, an unofficial C# API for the Raspberry Pi camera that supports Mono 4.x and .NET Standard 2.0.
I installed the MMALSharp NuGet package and initialized a singleton object in the constructor so that it can be reused while streaming continuous shots. I also set the picture resolution to 640 x 480, because the default resolution is very high and the file size is huge.
public class PiCameraService |
Now that we are done with the server-side app, our next step is to deploy it to the Raspberry Pi. There are two different ways to publish the app to the Pi.
I used self-contained deployment so that all the dependencies are part of the deployment. The following publish command generates the final output with all the dependencies.
dotnet publish -r linux-arm |
You will find the final output in the linux-arm/publish folder under the bin folder. I used network file sharing to copy the files to the Raspberry Pi.
After all the files were copied, I connected to the Raspberry Pi through a remote connection and ran the app from the terminal.
I decided to build a Chrome extension as my SignalR client because it supports real-time notifications and doesn't need a server to host the app. The client app has a background script that initializes the SignalR connection with the hub and runs in the background to receive notifications. It also has a popup window with start and stop streaming buttons to invoke the stream and view its output.
manifest.json defines the background scripts, icons, and permissions needed for this extension.
{ |
// The following sample code uses modern ECMAScript 6 features |
background.js initiates the SignalR connection with the hub at the defined URL. We also need signalr.js in the same folder; to get it, install the SignalR npm package and copy signalr.js from the node_modules\@aspnet\signalr\dist\browser folder.
npm install @aspnet/signalr
This background script keeps our SignalR client active; when it receives a notification from the hub, it shows a Chrome notification like the one below.
|
The popup HTML shows the stream content when the start streaming button is clicked and completes the stream when the stop streaming button is clicked.
var __awaiter = chrome.extension.getBackgroundPage().__awaiter; |
When the user clicks the start streaming button, the client invokes the stream hub method (StartStream) and subscribes to it. Whenever the hub sends data, the client receives the content and sets the value directly on the image src attribute.
streamContent.src = "data:image/jpg;base64," + item;
When the user clicks the stop streaming button, the client invokes the StopStream hub method, which sets the _isStreamRunning property to false and completes the stream.
This was a fun project. I wanted to experiment with SignalR streaming, and it worked as I expected. Soon we are going to have a lot more new features in SignalR (IAsyncEnumerable) that will make it even better for many other real-time scenarios. I have uploaded the source code to my GitHub repository.
Happy Coding.
I am a huge fan of SignalR. Today, David Fowler, the creator of SignalR, mentioned my tweet in his timeline, and I am so happy about that. This made my day. :)
To set up the Hexo blog framework on your machine, first install the latest Node.js and Git. Then run the following command to install the Hexo framework.
npm install -g hexo-cli |
After Hexo is installed on your machine, run the following command to initialize it.
$ hexo init <folder> |
After the installation is done, a basic project folder is created with the following structure.
. |
_config.yml holds all the configuration for your blog. You can modify the blog title, description, keywords, etc. It also holds the details of the current theme and any additional plugins installed on your website.
There are plenty of themes available for Hexo, and I decided to go with one of the most popular, called Next. I also liked the Icarus theme, but since it has a three-column layout, I went with Next. It has four different theme layouts to customize, as well as light and dark modes for code blocks. To install the theme, navigate to the site root directory and clone the Git repository with the following command.
$ git clone https://github.com/theme-next/hexo-theme-next themes/next |
Once the theme is installed, change the theme name to Next in _config.yml in the site root directory.
You can customize the theme settings by changing the _config.yml inside the themes\next directory. All the customization details are available in https://theme-next.org/docs/theme-settings/.
I installed the Hexo sitemap plugin to generate the sitemap.xml file automatically. There are many other good plugins available on the official site.
To create a new post or a new page, you can run the following command:
$ hexo new [layout] <title> |
post is the default layout, but you can use page if you want to create a new page. You can also change the default layout by editing the default_layout setting in _config.yml.
Once the new post is created, a markdown (.md) file is created under the source\_posts folder with the default front matter. You can use any markdown editor to write articles. Here are some of the most popular online and offline markdown editors.
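For reference, the default front matter at the top of a newly generated post file looks roughly like this; the title and date below are placeholders, not generated values.

```yaml
---
title: My First Post
date: 2019-08-01 10:00:00
tags:
---
```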
After we create the article, we can generate the static html by running the following command.
hexo generate |
This command generates the static HTML files and all the necessary supporting files (JavaScript, CSS) in the public folder. We can verify the page locally by running the following command.
hexo server |
This runs the Node.js server on the default port 4000. You can verify your blog by browsing to http://localhost:4000.
Now that we have successfully created our website locally, it is time to deploy it to Netlify hosting. I chose Netlify instead of GitHub Pages because it supports a CI/CD pipeline that deploys the site automatically whenever I commit changes to my GitHub repository. Also, the Netlify free account features are sufficient for running any personal blog.
Signing up with Netlify is pretty straightforward: you just link it with your GitHub account.
After you sign up with Netlify, create a new repository on GitHub to push the code base to. Once the repository is created, copy the remote repository URL to set up the remote origin from your local machine.
Navigate to the local root directory and initialize it as a Git repository.
git init |
Add all the files from your local repository and commit them locally.
git add . |
Now, setup the remote origin with the following command.
git remote add origin Git_Repository_URL |
Now, you can push your local changes to git repository with the following command.
git push -u origin master |
Now, log in to Netlify and create a new site by clicking the New Site from Git button.
Link your github repository from the next page for continuous deployment.
You have to authorize Netlify to access your GitHub repository. After you authorize it, choose your website repository and the branch. As the last step in the wizard, Netlify automatically identifies the Hexo blog and fills in the hexo generate build command and the default publish directory.
Click Deploy site to finish creating automatic deployment setup.
This starts the deployment process and publishes the blog on Netlify under a random subdomain URL. You can change the random name to a meaningful subdomain in your Netlify account if needed.
Every time we push changes to the GitHub repository, it automatically triggers a build to deploy them. Very cool!
Now that we have finished setting up the static website using Hexo and Netlify, it's time to set up a custom domain for the blog. I purchased my domain through Google Domains, and it costs $12 per year. You can go with any domain service that works well for you.
In my Google Domains account, I had to create an A record with the Netlify load balancer IP and a CNAME record with the Netlify alias name. That's it.
We are done with all the steps and my blog is now ready to serve.
This article covered the basics of creating a personal blog with the Hexo static HTML generator framework and deploying it to Netlify automatically using continuous deployment. I wrote this blog entry using the Typora editor, and it is so easy to create an article with no hassle. Overall, I am happy with the decision to move my blog from Blogger to Hexo. What are your thoughts?
This will be the first in a series of blog posts exploring some of the hidden gems of C#: surprisingly useful features that are not used much by everyday developers.
From version 7.0, C# introduced a feature called discards: a dummy variable denoted by the underscore character _. Discards are equivalent to unassigned variables. The purpose of the feature is to intentionally skip a value without creating an explicit variable for it.
For example, if you call a method that returns an object but you only care about the side effect of the call, not the return value, you can use a discard. This avoids an unnecessary variable and makes your intent clear.
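For example (the service and method names here are hypothetical):

```csharp
// We call the method for its side effect and deliberately discard the
// returned object instead of assigning it to a throwaway variable.
_ = userService.RemoveInactiveUsers();
```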
Every developer has come across scenarios like checking whether a given string is a valid DateTime using the TryParse method. However, TryParse expects an out parameter to produce the DateTime result in addition to returning a boolean, so we must declare a result variable for the out parameter even if we never use it. This is an ideal situation for a discard if we are not going to use the result object.
DateTime result; |
In the example above, we never use the result object; we are just checking whether the given string is a valid DateTime.
if (DateTime.TryParse("02/29/2019", out _)) |
With a discard, we can completely ignore the result variable.
Conversely, if we want to ignore the boolean return value and are interested only in the parsed result object, we can write it as below.
_ = DateTime.TryParse("02/29/2019", out result);
Discards variable introduced in C# 7. So, it will work only on version 7 and above.
If you have a value tuple that yields multiple values and you are interested in only one or two of them, you can use discards without creating the other variables. For example: var (a, _, _) = (1, 2, 3).
Discards in C# provide a way to ignore unused local values instead of declaring them. I think this is a very nice hidden feature of C# that people may not use very often. I will keep sharing more hidden gems in upcoming posts. If you have something to share, please post it in the comments section.
In this article, I will explain how to containerize a simple N-tier CRUD MVC application using Docker. We will create separate app server and database server container images, then deploy and run the application. If you are new to Docker, I recommend first reading Sahil Malik's article about Docker for developers and watching Elton Stoneman's excellent Pluralsight course, Modernizing .NET Apps with Docker.
I took the N-Tier Application on ASP.NET MVC - A Complete Solution sample from the MSDN Code website, which runs on the full .NET Framework. It performs basic CRUD operations for maintaining employee data using the Model-View-Controller pattern together with the repository pattern and an N-tier deployment architecture. We will modernize this application by containerizing it into Docker images, with separate database and application server instances. The database server is based on the Docker version of SQL Server Developer Edition, and the application server is based on the microsoft/aspnet:latest Docker image. Every time a new container instance is created, a new database is created, and all the data from the prior container instance is destroyed when it is stopped; this works perfectly for automated testing scenarios.
Now I am going to explain the docker-compose file that orchestrates how to build and deploy the .NET Framework application into Docker containers. Visual Studio provides default container orchestration support for .NET web projects; you can add it by right-clicking the web project and selecting Container Orchestration Support, as shown below.
However, I am not using the built-in container orchestration support to create the docker-compose file; I created it manually from scratch using the Visual Studio Code editor.
In the root folder of the project, create a new file called docker-compose.yml with the code below. I used Visual Studio Code as my editor; it has great support for YAML files, including IntelliSense.
version: '3'
services:
docker_ntierdemo_app:
image: jeevasubburaj/dockerntierdemo_app:v1
build:
context: ./NtierMvc/bin/Release/Publish
depends_on:
- docker_ntierdemo_db
hostname: ${APP_UUID}
container_name: ${APP_UUID}
networks:
docker_ntierdemo-net:
ipv4_address: 172.16.238.20
docker_ntierdemo_db:
image: jeevasubburaj/dockerntierdemo_db:v1
build:
context: ./Database
ports:
- "14333:1433"
env_file: db_dev.env
hostname: ${DB_UUID}
container_name: ${DB_UUID}
networks:
docker_ntierdemo-net:
ipv4_address: 172.16.238.21
networks:
docker_ntierdemo-net:
ipam:
driver: default
config:
- subnet: 172.16.238.0/24
Let's talk about each line in the docker-compose file above to understand what is going on. Before we take a deep dive, I recommend reading the official docker-compose guide on the Docker website.
version: '3'
This is the version of the docker-compose format that we use in this example.
services:
  docker_ntierdemo_app:
    ....
  docker_ntierdemo_db:
    ....
The services definition contains the configuration applied to each container started for that service. In our example, we create application and database server services.
Before we go into the services in detail, let's discuss how to create environment variables in docker-compose using the default .env file and a custom env file. We will create custom environment variables, such as the hostname and the SQL Server login password, and access them from the docker-compose file.
By default, you can set your environment variables in a .env file, which docker-compose looks for automatically. If you want a custom environment file, you can create one and reference it inside the docker-compose file. In this example, I used both. You can also define environment variables inside the docker-compose file itself without creating an environment file.
APP_UUID=Demo_App_Server
DB_UUID=Demo_Db_Server
I have created custom host names for both the app and db servers, and I will be using these variables inside the docker-compose file. The same values are also configured in web.config so that the app server can connect to the db server.
SA_PASSWORD=P@ssw0rd
ACCEPT_EULA=Y
In this custom environment file, I have defined the default sa account password and the accept-EULA flag that SQL Server needs to start inside the container.
image: jeevasubburaj/dockerntierdemo_db:v1
build:
  context: ./Database
ports:
  - "14333:1433"
env_file: db_dev.env
hostname: ${DB_UUID}
container_name: ${DB_UUID}
In the first line, I defined the name of the image with a version number.
Before we jump into the build section, let us look at the other settings. I mapped the default SQL Server port 1433 in the container to port 14333 on the host using the ports configuration, so that you can connect to the database from your host machine with the server name localhost,14333. This step is optional.
We have also defined the hostname and container_name using environment variables. These are needed to configure the database server name in our web.config before we deploy the application into the container.
Build configurations are applied at docker build time. The context configuration defines the path to a directory containing the Dockerfile. I created a new folder called Database, placed the Dockerfile and Database_Setup.sql file in it, and pointed the context to that folder. When we build the docker image using docker-compose, it runs the Dockerfile inside the Database folder and builds the database image. By default, docker looks for a file named Dockerfile; if you want a custom name, add the dockerfile configuration to specify it.
FROM microsoft/mssql-server-windows-developer:latest
COPY ./Database_Setup.sql .
RUN sqlcmd -i Database_Setup.sql
This Dockerfile takes the SQL Server Developer edition base image, copies Database_Setup.sql into the image, and executes it using the sqlcmd command, which creates the database and the tables defined in the SQL file.
USE [master] |
networks: |
In the networks configuration section, we can define any custom network properties that are needed. If we don't define any networks configuration, docker creates a default network in bridge mode. In the example above, I created a custom network with a default subnet range so that I can configure custom IP addresses for my app and db servers. This is useful in scenarios where an enterprise application is licensed against certain device parameters such as MAC address or IP address: the container instances are created with the same IP and MAC address every time, so you don't have to reinstall the license for every instance.
docker_ntierdemo_app: |
In the app server service configuration, we define the name of the image and, in the build context, configure the path of the published output folder. We will create a publish profile in Visual Studio to deploy the build output to that folder along with the Dockerfile. The Dockerfile must be added to the project with its build action set to Content so that it also gets deployed to the publish folder.
FROM microsoft/aspnet:latest
COPY . /inetpub/wwwroot/
In this Dockerfile, we take the Microsoft ASP.NET base image and copy the build output directly into the wwwroot folder inside the container image. We could also put the build output in a different folder and create an IIS website with a PowerShell command.
The depends_on configuration defines the dependency between services. In this example, the app server depends on the database server, so when we run the services, docker starts the database service first and then the app service, based on the order we defined.
We are now done with the orchestration configuration for deploying our application into docker containers using docker-compose, so we can build the images and bring up the container instances to test it. Before we start, we must create the publish profile to deploy the build output into the publish folder. Make sure the Dockerfile in the web project has its build action set to Content.
Also, in web.config, change the database server name to match the db server name defined in the env file.
Launch a PowerShell window from the root folder and run the docker images command to show the list of images. I have already downloaded the aspnet and SQL Server images from Docker Hub.
Let's build the docker images using the docker-compose build command. This will first create the database image based on the SQL Server Developer edition base image, creating the database and tables from the SQL we provided, and then create the app server image based on the ASP.NET framework docker image, copying the build output from the publish folder into the wwwroot folder inside the container image.
Now that we have successfully created the docker images, we can verify them by running the docker images command.
Let us now bring up new container instances from our images using the docker-compose up command. This command creates a database container instance first, then the app server instance, and attaches it to the database server. Once the container instances are up, we can verify them by testing our application from the browser.
Verify the application by launching the browser and entering the IP address of the app server container instance.
Now that the home page is up and running, let's try adding a new employee to our table.
Let's also verify the data in SQL Server by connecting from the host with localhost,14333.
Great. If we stop the container now, all the data that we created will be gone, and the next instance will start from a clean slate. Let us test that by running the docker-compose down command. You can verify that all the running instances are down by running the docker ps command.
If we create a new instance now, it will start from a clean slate, and the employee record that we created should no longer exist.
Let us run the docker-compose up command to bring up the new instance.
We have successfully deployed the complete N-tier CRUD MVC application into docker containers. As I mentioned earlier, we can use containerization for automated end-to-end or security testing of a monolithic application. We can also integrate it with a CI/CD pipeline to run all the test scenarios before merging the pull request from the feature branch.
In the above example, we did not store the state changes as part of the container instances. All the changes are gone when the container instance is stopped. However, if we want to store the state of the application and database changes, docker provides the functionality of creating volumes which will mount the folder from host to docker container so that all the state changes will be persisted. This will be useful in the scenario like automated testing to store the results.
In order to create a volume in docker, we use the volumes configuration section in the docker-compose file. In the example below, I created a directory called DB on my host server, put the MDF and LDF database files inside it, and then mounted that folder into the container.
volumes:
  - ./DB/:c:\db
The next step is to attach the database instead of creating it, by adding the attach_dbs variable in the env file. This creates a database called NtierMvcDB and attaches the existing MDF and LDF files to it every time a container instance is created. All the DB state changes are now stored even after the container is stopped, so when we start a new container instance, it also shows the data from the previous instance.
SA_PASSWORD=P@ssw0rd
ACCEPT_EULA=Y
attach_dbs=[{'dbName':'NtierMvcDB','dbFiles':['C:\\\\DB\\\\NtierMvcDB.mdf','C:\\\\DB\\\\NtierMvcDB.ldf']}]
Some monolithic application engines may run as a Windows service. The good thing about docker on Windows is that it supports Windows services, since there is no GUI involved. If you want to install your application engine's Windows service as part of the docker image build and run it, use the PowerShell commands below in the Dockerfile.
RUN powershell new-service -Name "AppEngineService" -StartupType Automatic -BinaryPathName "C:\app\bin\AppEngineService.exe"
RUN powershell start-service -Name "AppEngineService"
I hope this article helps you understand how to containerize a .NET Framework monolithic application. Docker containerization is not only for breaking a monolithic application into a microservice architecture. It can also be used to modernize the packaging of a monolithic application into a docker image and ship it frequently for scenarios like automated end-to-end testing and security testing.
I have uploaded the entire source code in my github repository.
Happy Coding!!
In this article, I will discuss how to show real-time cricket score notifications from a Chrome extension using serverless Azure Functions and Azure SignalR. I have used the cricapi.com free API service to get live cricket score updates. The purpose of this article is to show the power of serverless architecture using Azure Functions and broadcasting to connected clients in real time using Azure SignalR. The demo source code attached to this article is for personal educational purposes only, not production use.
As a first step, to consume the API service from cricapi.com, register an account to get the API key. They allow 100 free hits per day for testing purposes.
Log into your Azure Portal (https://portal.azure.com/) and create a new resource of type SignalR Service. After the service is created, copy the connection string from the Keys section.
Prerequisites
Launch the Visual Studio and Create a New Azure Function Project
Select the Azure Function v2 Preview and the Http trigger template.
For this demo, we will be creating two azure functions.
NegotiateFunction (HttpTrigger)
This function gets the JWT token for the client so that the SignalR client can connect to the Azure SignalR Service hub.
BroadcastFunction (TimerTrigger)
This function runs every 1 minute (configurable), calls the CricAPI service to get the latest score for the defined match id, and broadcasts it to all connected clients.
In order to use Azure SignalR Service in Azure Functions, I have used Anthony Chu's "AzureAdvocates.WebJobs.Extensions.SignalRService" library.
public static class NegotiateFunction |
public static class BroadcastFunction |
We have to create an app settings key called AzureSignalRConnectionString in order to connect to the Azure SignalR Service from our Azure Functions. We add the setting to local.settings.json for local testing and to Application Settings in Azure after we deploy.
{ |
We are now done with the coding for the Azure Functions, so we can test them locally before deploying to the Azure portal. In local.settings.json, we have defined the LocalHttpPort as 7071 and allowed cross-domain requests by setting CORS to *.
Run the application by pressing F5, which will create the host and deploy the functions to localhost.
As you see above, the Azure Functions are now hosted locally. We can call the negotiate function using the following URL, which will return the JWT token to connect to the SignalR Service.
Now that it works on localhost, we can deploy the Azure Functions to the Azure portal.
In Visual Studio, Right click on the solution and Select the Publish option from the Menu.
Check the Run from ZIP checkbox and click the Publish button.
Click the Create button to create the Azure hosting plan and storage account under your Azure subscription. After the account is created, clicking the Publish button at any time will ship the files to the portal and deploy the Azure Functions.
You can login to Azure Portal to check the deployed Azure Functions.
We also need to add the AzureSignalRConnectionString key in Application Settings.
We have completed publishing the Azure Functions in the portal. Let us now create a Chrome extension SignalR client to receive the cricket score in real time. The timer-triggered broadcast function will run every minute and push the cricket score to all connected clients.
SignalRClient.js
const apiBaseUrl = 'https://azurefunctionscricketscore20180911095957.azurewebsites.net';
const hubName = 'broadcasthub';
getConnectionInfo().then(info => {
const options = {
accessTokenFactory: () => info.accessKey
};
const connection = new signalR.HubConnectionBuilder()
.withUrl(info.endpoint, options)
.configureLogging(signalR.LogLevel.Information)
.build();
connection.on('broadcastData', (message) => {
new Notification(message, {
icon: '48.png',
body: message
});
});
connection.onclose(() => console.log('disconnected'));
console.log('connecting...');
connection.start()
.then(() => console.log('connected!'))
.catch(console.error);
}).catch(alert);
function getConnectionInfo() {
return axios.post(`${apiBaseUrl}/api/negotiate`)
.then(resp => resp.data);
}
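One thing the client above does not handle is reconnecting after `connection.onclose` fires. A minimal, hedged sketch of a generic retry helper follows — plain JavaScript, not part of the SignalR API; in the extension you would pass it `() => connection.start()`. The delays and the stand-in start function are illustrative assumptions.

```javascript
// Retry an async start function with fixed backoff delays; resolves as soon
// as one attempt succeeds, and rejects only after all delays are exhausted.
async function startWithRetry(start, delaysMs = [0, 2000, 5000]) {
  let lastError;
  for (const delay of delaysMs) {
    await new Promise(resolve => setTimeout(resolve, delay)); // wait before trying
    try {
      return await start(); // e.g. () => connection.start()
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next delay
    }
  }
  throw lastError;
}

// Example with a stand-in start function that fails twice, then succeeds.
let attempts = 0;
startWithRetry(() => {
  attempts += 1;
  return attempts < 3 ? Promise.reject(new Error('down')) : Promise.resolve('connected');
}, [0, 0, 0]).then(result => console.log(result)); // 'connected'
```

The same helper could also be wired into `connection.onclose` to re-establish the hub connection when the service drops it.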
Manifest.json
In the manifest.json, we define the list of scripts to load (axios, signalr and signalrclient).
{
"name": "Real Time Cricket Score Demo",
"version": "1.0",
"description":
"Real time Cricket Score Update from Serverless Azure Functions pop up on the desktop.",
"icons": {"16": "16.png", "48": "48.png", "128": "128.png"},
"permissions": [
"background",
"tabs",
"notifications",
"http://*/*",
"https://*/*"
],
"background": {
"persistent": true,
"scripts": ["axios.min.js","signalr.js","signalrclient.js"] },
"manifest_version": 2,
"web_accessible_resources": [
"48.png"
]
}
To install the Chrome extension on your local machine, launch Chrome and open Extensions from the menu. Click Load unpacked extension and select the folder where the extension is placed.
After the extension is installed, the broadcast Azure function will execute on schedule and broadcast the latest score to the newly connected client as below.
With a few lines of code, we have created serverless Azure Functions that fetch data from an API endpoint and broadcast the messages to all connected clients in real time using Azure SignalR. In this article, I have hard-coded the API key in the program, but ideally it should be stored in Azure Key Vault and read from there. I hope this article helps you get started with Azure Functions. I have uploaded the entire source code to my GitHub repository.
Happy Coding!
Real-time technologies are now part of every modern application, and SignalR is the most popular .NET library for building real-time scenarios. Recently, Microsoft announced the public preview of Azure SignalR, a cloud-based, fully managed service to build real-time applications without worrying about capacity provisioning, scaling, or persistent connections. In this article, we are going to discuss how to create a .NET Core SignalR server console app to broadcast messages to all the connected clients in real time without using an ASP.NET Core SignalR web app.
In the enterprise world, SignalR applications often come with high-volume data flows and large numbers of concurrent connections between app and client. To handle that scenario, we have to set up web farms with sticky sessions and a backplane like Redis to make sure messages are distributed to the right client. If we use the Azure SignalR service, it handles all those issues, and we can focus only on business logic.
In addition, Azure SignalR Service works with an existing ASP.NET Core SignalR hub with very few changes. We have to add a reference to the Azure SignalR SDK, configure the Azure connection string in the application, and then add a few lines of code: services.AddSignalR().AddAzureSignalR() and app.UseAzureSignalR in Startup.cs.
Existing ASP.NET Core SignalR client apps work with Azure SignalR Service without any modification to the code. You can refer to my earlier article, "How to build real time communication with cross platform devices using Azure SignalR Service", for more details.
As of today, if you want to implement duplex communication between a SignalR client and server using Azure SignalR Service, you need an ASP.NET Core SignalR server hub (web app). However, if you just want to push messages from the server to clients (one way), you can use Azure SignalR Service without an ASP.NET Core SignalR hub (web app).
In the diagram above, we have two endpoints, called the Server Endpoint and the Client Endpoint. With those endpoints, the SignalR server and client can connect to Azure SignalR Service without the need for an ASP.NET Core web app.
Azure SignalR Service exposes a set of REST APIs to send messages to all clients from anywhere, using any programming language or any REST client such as Postman. The server REST API Swagger documentation is at the following link.
https://github.com/Azure/azure-signalr/blob/dev/docs/swagger.json
REST APIs are only exposed on port 5002. In each HTTP request, an authorization header with a JSON Web Token (JWT) is required to authenticate with Azure SignalR Service. You should use the AccessKey in Azure SignalR Service instance’s connection string to sign the generated JWT token.
Rest API URL
POST https://<service_endpoint>:5002/api/v1-preview/hub/<hub_name>
The body of the request is a JSON object with two properties:
Target: The target method you want to call in clients.
Arguments: an array of arguments you want to send to clients.
The API service authenticates REST calls using a JWT token. When you generate the JWT token, use the access key from the SignalR service connection string as the secret key, and put the token in the authorization header.
https://<service_endpoint>:5001/client/?hub=
Clients also connect to the Azure SignalR service using a JWT token as described above; each client uses a unique user id and the Client Endpoint URL to generate the token.
With all the details above, let us build a simple .NET Core console app to broadcast messages using Azure SignalR Service.
In this demo, we will see how the SignalR console app server connects to Azure SignalR Service with a REST API call to broadcast messages to all connected console app clients in real time.
We will be creating following three projects.
This class library holds the logic to generate the JWT token based on the access key from Azure Connection string. It also holds the method to parse the Azure SignalR Connection String to get the endpoint and access key.
Nuget Packages Required
public class ServiceUtils |
This is the .net core SignalR Server console app to broadcast the messages via REST API call.
Nuget Packages Required
Steps
dotnet user-secrets set key value
var configuration = new ConfigurationBuilder() |
namespace AzureSignalRConsoleApp.Server |
This is the .net core SignalR Client console app to receive the messages from Azure SignalR Service.
Nuget Packages Required
Steps
In order to load the Azure connection string from User Secrets into the configuration object, we must follow the same steps as above.
var configuration = new ConfigurationBuilder() |
Generate the JWT access token using the client endpoint URL, and create a hub connection with the client hub URL and the access token to establish the connection with Azure SignalR Service. In the hub connection's On event, wire up the same target method to receive the message.
namespace AzureSignalRConsoleApp.Client |
Now that we have completed the code, let us run the application to see the demo. First, launch the server and then launch more than one client app in multiple command windows. After that, start typing in the server command window to send messages to all the clients in real time.
In this article, we discussed how to use Azure SignalR Service in a .NET Core console app without using an ASP.NET Core web app. In the real world, Azure SignalR Service can be integrated with other Azure services like serverless computing (Azure Functions) to push notification messages to all the connected clients in real time based on some trigger, without hosting an ASP.NET Core web app and managing the connections with clients. I have uploaded the entire source code to my GitHub repository.
Happy Coding!
In ASP.NET Core, the Configuration API was introduced to access key-value pair data from various sources in the order they are configured, which allows you to access the configuration keys using the Configuration class regardless of where the keys are stored. If the same key-value pair is stored in more than one place, the latest in order of precedence overwrites the other values. Let's dive into the demo to see how it works.
By default, in ASP.NET Core 2.0, the configuration providers' order of precedence is hidden in Program.cs behind the method call CreateDefaultBuilder(). If you look at the source code on GitHub, you will see that it loads the data in the following order: appsettings.json, appsettings.{Environment}.json, user secrets (in the Development environment), environment variables, and finally command-line arguments.
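The override behavior itself is easy to picture outside of .NET: providers are applied in registration order, and the last provider that defines a key wins. A minimal JavaScript sketch of that idea follows — the provider names are just labels mirroring the order above, not real file loading:

```javascript
// Each provider contributes key/value pairs; merging them in registration
// order means the last provider that defines a key wins.
const providers = [
  { name: 'appsettings.json',
    data: { DemoConfigKey: 'Value from AppSettings.json' } },
  { name: 'appsettings.Development.json',
    data: { DemoConfigKey: 'Value from AppSettings.Development.json' } },
  { name: 'user secrets',
    data: { DemoConfigKey: 'Value from Secrets.json' } },
];

// Fold the providers into one configuration object, later entries overriding.
const configuration = providers.reduce(
  (merged, provider) => Object.assign(merged, provider.data), {});

console.log(configuration.DemoConfigKey); // 'Value from Secrets.json'
```

This is exactly the behavior the demo below exercises: each section adds a later source and the displayed value changes accordingly.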
In addition, you can create custom providers by implementing the IConfigurationSource interface and adding them to the processing pipeline. ASP.NET Core also supports reading from XML files in addition to JSON files.
In this demo, we will see how to read and display a configuration key that is available from various sources, and which value is displayed based on the order of precedence. Launch Visual Studio and create a new ASP.NET Core Empty web application.
After the empty project is created, add the AppSettings.json item into the project.
Add the new key value pair item called DemoConfigKey in the AppSettings.json and set the value as “Value from AppSettings.json”
In order to access the value from configuration object we need to use Startup constructor as shown below.
After we add a Startup constructor with a parameter of type IConfiguration, we can store the injected IConfiguration object in a local variable and use it anywhere within ConfigureServices() and Configure(). We can also access it from controllers via dependency injection by passing the IConfiguration object into the controller constructor.
In order to print the config value, I have just modified the default Configure method like below. This will print the value reading from configuration sources defined in the order.
When we run the application, you will see the result below, with the value read from AppSettings.json.
This file is used to override the keys in appsettings.json with deployment environment specific settings. For Example a file named appsettings.production.json would contain values specific to production.
By default, the ASP.NET Core environment is Development. You can modify it in Visual Studio in Project Properties if needed.
Let’s add the environment specific AppSettings.Development.json file into the project.
Add the value for DemoConfigKey as “Value from AppSettings.Development.json”
Let's run the application and see the result. We should get the value from AppSettings.Development.json, which overwrites the value from AppSettings.json based on the order of precedence.
The User Secrets file is a JSON file stored on the local developer machine. This file is unencrypted and stored outside of the solution directory (under user profile directory) and, therefore, is not checked into source control by accident. The user secrets file is used only for local development overrides like connecting to a local database server or development server API key values etc. These configuration values are only relevant to the local developer and any other developer / machine cannot access those values.
To add the user secrets file, right-click on the project file and select Manage User Secrets.
Add the Same key and set the different value as below.
Let's run the application; it should override all the other key values and show the value from Secrets.json.
Environment variables used mostly in container-based solutions like Docker Compose and Kubernetes. Docker allows environment variables to be configured in a Docker file using the ENV instruction. For example:
ENV DemoConfigKey="Value from Docker Environment Variable"
See the Docker reference documentation for more details.
Command-line arguments allow you to modify the configuration keys when running your application, without modifying any files, using the command-line syntax key=value.
In this demo, we run the application in command line window as below.
When running the application, you will see the result as below
ASP.NET Core allows you to have the same configuration key-value pair in multiple places and lets us write the same code to access those values, with the order of precedence applied regardless of the source. It also allows you to have multiple config files with different sets of keys and load them all into one configuration object. This is very useful when you want to split configuration settings into different files by module (e.g., all database-related key-value pairs in one file) or any other category that suits your application. For cloud-based solutions, you should also consider using Azure Key Vault for storing sensitive secrets.
As mentioned above, the default order of precedence is hidden behind the method CreateDefaultBuilder(args). If you want to create a custom configuration provider, you can do so by implementing the IConfigurationSource interface. I hope this article helps you understand how configuration data is accessed at runtime from various sources.
Happy Coding!
This article provides an overview of the architecture of the Magic Paste tool and how to set up the SignalR service in the Azure portal. It also provides an overview of how to develop SignalR client web, Android, and Windows Forms apps that communicate with each other in real time using Azure SignalR Service.
Microsoft recently released the preview version of the Azure SignalR service, a fully managed service that allows developers to focus on building real-time web experiences without worrying about capacity provisioning, reliable connections, scaling, encryption, or authentication. In this example, I set up the Azure SignalR service with the Free tier, which allows a maximum of a single unit with 100 connections. This is sufficient for this demo.
We will develop the SignalR client applications that connect to the Azure SignalR Service for real-time communication between cross-platform devices. We are going to develop the following applications.
This is an ASP.NET Core web application with a layout defined using the Bootstrap library. On the landing page, we add a DIV container to show all the incoming messages from the SignalR hub, and at the bottom of the page we place a text box and a button to publish text messages to the other clients. We will use the SignalR JavaScript client library to connect to the SignalR hub.
npm init -y |
We will create a new hub called MagicPasteHub that internally connects to the Azure service. Right-click on the project in Solution Explorer, create a new folder called Hub, then add a new file called MagicPasteHub.cs and paste the following code.
public class MagicPasteHub : Hub |
In Startup.cs, add the following code in the ConfigureServices method. The SignalREndPoint key holds the Azure endpoint value. During development, you can store the Azure endpoint values in user secrets. The AddAzureSignalR method establishes the link between the web app and the Azure SignalR library using the endpoint URL.
services.AddSignalR().AddAzureSignalR( |
Add the following code in the Configure method to map the Azure SignalR service to the MagicPasteHub route URL.
app.UseAzureSignalR(routes => |
In site.js, add the following code to set up the SignalR client: connect to the hub, wire up the button click event to send messages, and wire up the ReceiveData event to append incoming data to the DIV container.
const connection = new signalR.HubConnectionBuilder() |
The entire source of web client app is uploaded here in github.
This is a full .NET Framework Windows Forms app that runs in the system tray with the registered global hotkey CTRL + SHIFT + C. Whenever the hotkey is pressed anywhere on the desktop, the app publishes the content of the clipboard to the other clients. I used the NHotKey open-source library for registering the hotkey in Windows Forms.
private void MainForm_Load(object sender, EventArgs e) |
Add a new file called AzureSignalRClient.cs. This is the wrapper class that holds all the Azure SignalR client related code.
public class AzureSignalRClient |
This is a Xamarin-based Android app that connects to the SignalR hub to receive messages and show them as notifications to the user. When the user taps the notification, it opens the application with a list view of the content shared so far.
Launch Visual Studio and select Android -> Android App (Xamarin) from the menu.
From Solution Explorer, select Manage NuGet Packages, then select the Microsoft.AspNetCore.SignalR.Client library and install it.
public class MainActivity : ListActivity |
SignalR services are mainly used for apps with real-time requirements: high-frequency data flows and large numbers of concurrent connections between client and server. Now, the Azure SignalR Service allows you to use ASP.NET Core SignalR to build real-time experiences such as chat, live dashboards, collaborative editors and more, all without worrying about capacity provisioning, scaling, or persistent connections. This article explains the basic idea of how to use Azure SignalR Service; however, for a real-world implementation, we also need to look at application performance and battery optimization for mobile apps before implementing SignalR services.
I have uploaded the entire source code of Web, Android and WinForms in my github repository.
Happy Coding.
This is a simple chat application where the user logs in with a name and a choice of language to send and receive messages. When the user sends a message, the SignalR hub receives it, sends an API request to the Azure Cognitive Services Translator API, receives the translated text, and sends it back to the users in real time. The SignalR hub creates groups of users based on language, so that when the translated text comes back from the API, it is broadcast to the group for each language.
Video Demo
The realtime chat application is developed as a .NET Core web app using Razor Pages with a Bootstrap layout and the .NET Core SignalR library.
When the user logs in with the name and choice of language, the system establishes the connection with the SignalR hub and puts the user into the selected language group. It also notifies all the other users that a new user has joined.
When the user sends a message, the SignalR hub makes a web API call to the Cognitive Services API to get the translated message and sends the translated message to the other members in their selected languages in real time.
When the user exits the chat, the hub removes them from the group and notifies the other users.
Steps: Create a New Razor Web App with Login and Chat Pages and Add the SignalR Libraries
By default, the Microsoft.AspNetCore.SignalR package containing the server libraries is part of the ASP.NET Core web application template. However, the JavaScript client library for SignalR must be installed using npm. Use the following commands from the Node Package Manager Console to install it, then copy the signalr.js file from node_modules\@aspnet\signalr\dist\browser to wwwroot\lib\signalr\signalr.js (create a signalr folder under the lib directory).
npm init -y |
public class User |
This model class holds the name, the language preference selected by the user, and the SignalR connection ID.
public class ChatHub : Hub |
This is the main SignalR hub that communicates with all the clients and also makes the API calls to the translation library to translate the text. Constructor-based dependency injection loads the typed HttpClient object configured in Startup.cs to make the HTTP API calls.
Connect Method will be called whenever a new user connects. It adds the user to the static list (preferably a concurrent dictionary, to avoid multi-thread locking issues) and also adds them to the SignalR group based on the language the user selected. It sends a message back to the caller with the list of users so that the user panel can be populated on the UI side, and it notifies all the other users that a new user has joined the chat.
await Groups.AddToGroupAsync(id, language);
await Clients.Caller.SendAsync("onConnected", ConnectedUsers, name, id);
await Clients.AllExcept(id).SendAsync("onNewUserConnected", name);
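Assembled from the fragments above, the full Connect method might look roughly like this (a sketch; the ConnectedUsers collection, the method signature, and the User properties are assumptions not shown in full in the original):

```csharp
public async Task Connect(string name, string language)
{
    var id = Context.ConnectionId;

    // Static list of users; a ConcurrentDictionary is preferable to
    // avoid multi-thread locking issues
    ConnectedUsers.Add(new User { ConnectionId = id, Name = name, Language = language });

    // Put the user into the group for their selected language
    await Groups.AddToGroupAsync(id, language);

    // Send the caller the current user list to populate the user panel
    await Clients.Caller.SendAsync("onConnected", ConnectedUsers, name, id);

    // Notify everyone else that a new user has joined
    await Clients.AllExcept(id).SendAsync("onNewUserConnected", name);
}
```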
Disconnect Method gets called when the user exits the chat. It removes the user from the list and also notifies all the other users.
ConnectedUsers.Remove(item);
await Clients.AllExcept(item.ConnectionId).SendAsync("onDisconnected", item.Name);
SendMessage Method will be called whenever a user sends a message. It makes the API call to the translation service to get the translated text in each language, and sends the corresponding translated text to each group.
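That flow might be sketched like this (hedged; GetTranslations is a hypothetical helper that parses the translation response into language/text pairs, and the "newMessage" event name is an assumption):

```csharp
public async Task SendMessage(string message)
{
    // Identify the sender from the static user list
    var sender = ConnectedUsers.First(u => u.ConnectionId == Context.ConnectionId);

    // The distinct set of languages currently in use by connected users
    var languages = ConnectedUsers.Select(u => u.Language).Distinct().ToList();

    // One translation call returns the text in every requested language;
    // GetTranslations is a hypothetical helper wrapping the typed HttpClient
    foreach (var (language, text) in await GetTranslations(message, languages))
    {
        // Each language group receives the message in its own language
        await Clients.Group(language).SendAsync("newMessage", sender.Name, text);
    }
}
```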
You can select the Free tier for development purposes; it allows up to 2 million characters of translation per month.
After the Cognitive Service is created, you can obtain the Subscription Keys from the Quick Start Menu.
Now that we have completed setting up the Translator service in Azure, let's switch back to the code to consume the API. We will be using a typed HttpClient to consume the web service. Typed clients are custom classes with HttpClient injected in the constructor. This is wired up in the DI system by calling the generic AddHttpClient method in Startup.cs with the custom type. Another advantage of a typed client is that we can encapsulate all the HTTP calls inside specific business methods like SendMessage and GetSupportedLanguages.
public class CognitiveServiceClient |
The GetSupportedLanguages method makes an API call to get the list of supported languages.
The Translate method takes the message and the list of languages to translate, and returns a JSON array of the translated text for all the requested languages.
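A sketch of the typed client described above might look like this (hedged; the endpoint paths follow the Translator Text API v3, but the exact request shape, the use of Newtonsoft.Json, and returning the raw JSON string are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class CognitiveServiceClient
{
    private readonly HttpClient _client;

    public CognitiveServiceClient(HttpClient client)
    {
        // BaseAddress and the Ocp-Apim-Subscription-Key header are
        // configured via AddHttpClient in Startup.cs
        _client = client;
    }

    public async Task<string> GetSupportedLanguages()
    {
        // Translator Text API v3 "languages" endpoint
        return await _client.GetStringAsync("languages?api-version=3.0&scope=translation");
    }

    public async Task<string> Translate(string message, List<string> languages)
    {
        // Build the target-language query string, e.g. "&to=fr&to=de"
        var to = string.Join("", languages.Select(l => $"&to={l}"));

        var body = new StringContent(
            JsonConvert.SerializeObject(new[] { new { Text = message } }),
            Encoding.UTF8, "application/json");

        var response = await _client.PostAsync($"translate?api-version=3.0{to}", body);
        return await response.Content.ReadAsStringAsync();
    }
}
```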
In Startup.cs, make sure to add the below code in ConfigureServices method.
services.AddSignalR(); |
In the Configure method, add the following,
app.UseSignalR(routes => |
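A fuller sketch of that Startup wiring might be (hedged; the hub route "/chatHub" and the placeholder subscription key are assumptions; the base address is the global Translator endpoint):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSignalR();

    // Register the typed HttpClient so it can be injected into the hub
    services.AddHttpClient<CognitiveServiceClient>(client =>
    {
        client.BaseAddress = new Uri("https://api.cognitive.microsofttranslator.com/");
        // Placeholder - use the subscription key from the Azure portal
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");
    });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseStaticFiles();
    app.UseSignalR(routes =>
    {
        // "/chatHub" is an assumed route name
        routes.MapHub<ChatHub>("/chatHub");
    });
    app.UseMvc();
}
```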
In IndexModel.cs, we have the implementation of the GET method to return the list of supported languages.
private CognitiveServiceClient _client; |
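The page model might be completed roughly like this (a sketch; the property name and returning the raw JSON to the page are assumptions):

```csharp
public class IndexModel : PageModel
{
    private readonly CognitiveServiceClient _client;

    public IndexModel(CognitiveServiceClient client)
    {
        _client = client;
    }

    // Raw JSON of supported languages, rendered into the language dropdown
    public string SupportedLanguages { get; private set; }

    public async Task OnGetAsync()
    {
        SupportedLanguages = await _client.GetSupportedLanguages();
    }
}
```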
Finally, this can be tested by running the application in multiple browser windows with different usernames and languages selected, so that each window sends and receives messages in its selected language in real time. The same application can be hosted in Azure as a web app with our local SignalR hub, or by using the Azure SignalR Service.
In the future, it is quite possible we will see this real-time translation feature in every app, including the most popular messaging apps like WhatsApp; when a group chat includes people who speak different languages, enabling an automatic translation setting would help a lot.
I have posted the entire source code in my GitHub repository.
Happy Coding!!!
The below diagram depicts the architecture of the model and service components. In this article, I have added only the feed-based service; I will add support for DOM-based extraction later and publish it to GitHub.
IParserModel - Interface for the model.
public interface IParserModel
{
string RawContent { get; set; }
}
BaseParserModel – Base class for the parser model. RawContent holds the raw content of the feed or DOM.
public class FeedBaseParserModel : BaseParserModel
{
[ ]
public List<ISyndicationItem> SyndicationItems { get; set; }
}
RokuParserModel – This is the root class for Roku, which holds all the properties and subclasses expected by the Roku streaming box. I will add AndroidParserModel and IOSParserModel in the future to support other streaming boxes. RokuParserModel also has subclasses to hold other properties such as the video URL and thumbnail URL. This can be customized however our streaming box expects the model to be.
[ ] |
IParserService – Base interface for the service.
public interface IParserService<T> where T : IParserModel
{
Task<T> ParseContent();
}
BaseParserService
public abstract class BaseParserService<T> : IParserService<T> where T : IParserModel, new()
{
public async virtual Task<T> ParseContent()
{
return await Task.FromResult(new T());
}
}
RokuFeedParserService – This is the service class for the Roku format that holds all the core logic to extract the content items from the feed. It takes the feed URL in the constructor and overrides the ParseContent method to implement the feed-based parsing service. I used the SyndicationFeed library from .NET to parse RSS and Atom feeds. The base RokuFeedParserService parses the feed and populates the list of items in the SyndicationItems property. Later, the child class inheriting from RokuFeedParserService uses the SyndicationItems values to create custom-formatted XML/JSON output based on the streaming format requested.
public class RokuFeedParserService : BaseParserService<RokuFeedParserModel>
{
public string FeedURL { get; set; }
public RokuFeedParserService(string _feedURL)
{
FeedURL = _feedURL;
}
public async override Task<RokuFeedParserModel> ParseContent()
{
RokuFeedParserModel parserModel = new RokuFeedParserModel() { SyndicationItems = new List<ISyndicationItem>() };
using (XmlReader xmlReader = XmlReader.Create(FeedURL, new XmlReaderSettings() { Async = true }))
{
var reader = new RssFeedReader(xmlReader);
while (await reader.Read())
{
switch (reader.ElementType)
{
case SyndicationElementType.Item:
parserModel.SyndicationItems.Add(await reader.ReadItem());
break;
}
}
}
return parserModel;
}
}
Ch9RokuParserService – This is the child service class for the Channel 9 feed; it overrides the ParseContent method to populate the RokuFeedParserModel object from the SyndicationItems values. This returns the final Roku parser model object. We will add additional service classes here to support other formats.
public class Ch9RokuParserService : RokuFeedParserService
{
public Ch9RokuParserService(string _feedURL) : base(_feedURL)
{
}
public async override Task<RokuFeedParserModel> ParseContent()
{
var parserModel = await base.ParseContent();
parserModel.ParserItems = new List<RokuParserItem>();
int currIndex = 0;
foreach(var syndicationItem in parserModel.SyndicationItems)
{
RokuParserItem parserItem = new RokuParserItem();
parserItem.Title = syndicationItem.Title;
parserItem.ContentId = currIndex;
parserItem.StreamFormat = "mp4";
parserItem.MediaItem = new RokuMediaItem();
parserItem.MediaItem.StreamUrl = syndicationItem.Links.FirstOrDefault(i => i.RelationshipType == "enclosure")?.Uri.ToString();
parserModel.ParserItems.Add(parserItem);
currIndex++;
}
parserModel.ResultLength = currIndex;
parserModel.EndIndex = currIndex;
return parserModel;
}
}
BaseAPIController – This is the base API controller; it carries the default annotation attributes for the API controller and route actions. Note that I am using the [ApiController] attribute, which denotes a Web API controller class and, together with ControllerBase, provides useful behaviors such as automatic 400 responses. I have also defined the default route [Route("api/[controller]")] at the base class level so that I don't have to redefine it on every other controller.
[ApiController]
[Route("api/[controller]")]
public class BaseAPIController : ControllerBase
{
public BaseAPIController()
{
}
}
Ch9Controller – This controller will have all the GET methods for the various streaming boxes and produces XML or JSON output. Note that I am using HttpGet("Roku") so that I can have multiple GET methods on a single controller. You can also define routing by action, like [HttpGet("[action]")], and then call the API with the method name, e.g. /API/Ch9/GetRokuFormat.
public class Ch9Controller : BaseAPIController
{
[ ]
[ ]
public async Task<RokuFeedParserModel> GetRokuFormat()
{
// Await the result instead of blocking on .Result, which can deadlock
var parserService = new Ch9RokuParserService("https://s.ch9.ms/Feeds/RSS");
return await parserService.ParseContent();
}
}
Startup.cs - In my Startup class, I have enabled both the XML and JSON formatters to support both formats based on the request. You can also create a custom formatter if you need something other than XML/JSON. Note that I have enabled RespectBrowserAcceptHeader = true to support XML output. Also, I have used XML annotations to change the element names and added the XmlIgnore attribute to exclude properties from serialization.
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc(options =>
{
options.RespectBrowserAcceptHeader = true;
})
//support application/xml
.AddXmlSerializerFormatters()
//support application/json
.AddJsonOptions(options =>
{
// Force Camel Case to JSON
options.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseMvcWithDefaultRoute();
}
}
Program.cs - I used the default settings to start the Kestrel web server and configured the hosted application to listen on port 5000. This will be used when we deploy the application to the Raspberry Pi later.
public class Program |
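The truncated Program class above might look like the following (a sketch based on the default ASP.NET Core 2.1 template plus the port-5000 configuration mentioned in the text):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateWebHostBuilder(args).Build().Run();

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            // Listen on all interfaces on port 5000 so other machines
            // on the network can reach the API
            .UseUrls("http://*:5000")
            .UseStartup<Startup>();
}
```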
Now that we have finished coding our Channel 9 feed Web API to support the Roku XML format, let us first run the application using IIS Express to make sure it works.
Now that our application produces the XML output as expected, we will deploy this .NET Core 2.1 Web API to our Raspberry Pi. Please note that .NET Core runs only on the Raspberry Pi 2/3; it does not run on the Pi Zero. My Raspberry Pi is currently running Raspbian OS.
Before we deploy our application, the first step is to install the .NET Core SDK and runtime on the Raspberry Pi. To install the SDK, we will execute the commands below in the Pi terminal window. I already have a remote connection to my Pi enabled from my laptop. I have also enabled a network share on my Pi so that I can publish the code later using a Windows file share. If you want to know how to enable remote connection and file sharing for your Raspberry Pi, visit DaveJ's article, Beginner's Guide to Installing Node.js on a Raspberry Pi, where he explains all the steps in detail.
Launch the remote connection and connect to the Pi. Open a terminal window and run the following commands.
$ sudo apt-get -y update
$ sudo apt-get -y install libunwind8 gettext
$ wget https://dotnetcli.blob.core.windows.net/dotnet/Sdk/2.1.300-rc1-008673/dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz
$ wget https://dotnetcli.blob.core.windows.net/dotnet/aspnetcore/Runtime/2.1.0-rc1-final/aspnetcore-runtime-2.1.0-rc1-final-linux-arm.tar.gz
$ sudo mkdir /opt/dotnet
$ sudo tar -xvf dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz -C /opt/dotnet/
$ sudo tar -xvf aspnetcore-runtime-2.1.0-rc1-final-linux-arm.tar.gz -C /opt/dotnet
$ sudo ln -s /opt/dotnet/dotnet /usr/local/bin
The first two commands install dependency modules that Raspbian needs for the .NET Core SDK and runtime; these have to be added manually. For more details, you can check the official documentation here.
The next two wget commands download the latest .NET SDK and runtime (2.1 RC1); the commands after that extract them to the /opt/dotnet folder and create a symbolic link for dotnet.
If all the above steps complete with no errors, the .NET Core SDK is installed on the Pi. Just run the command dotnet --info to display the .NET SDK and runtime details.
Now that we have installed the .NET Core SDK and runtime on the Pi, it's time to build and deploy the published code. As I mentioned earlier, I have a network shared drive enabled on my Pi to copy the files; you can also transfer files to the Pi via other methods such as FTP.
As a first step, let's publish the application for the linux-arm architecture, since Raspbian is Linux-based. Navigate to the project folder and execute the following command to publish the output.
dotnet publish . -r linux-arm
If you want to publish in release mode, you can add the -c Release option.
The code is published to the linux-arm\publish folder. Now I will create a folder on the Pi called VideoParserAPI and copy all the files from the linux-arm\publish folder into it.
Now that the code is published to the Pi, we will run the application to start the service listening on port 5000. Remember, in my Startup class I configured port 5000 for network connections. Also remember that this is my personal project: I will be using it only on my internal network and have no intention of publishing it on the internet. If you have an application that needs to be published on the internet, you should use a reverse proxy like nginx to handle port 80 and route requests to the Kestrel web server, per best practices.
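If you did expose the service publicly, a minimal nginx reverse-proxy block might look like the following (a sketch; the server name is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward requests to the Kestrel server listening on port 5000
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```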
Let's run the application to start the service. Open a terminal window on the Pi and execute ./VideoParserAPI. In a few seconds, the service will start listening on port 5000.
Let's call the Web API from my system to see the output.
There you go. A Web API application developed in the latest .NET Core 2.1 is running on a Raspberry Pi.
I have created a design pattern that handles all the core parsing logic in the base classes so that we don't have to rewrite the logic for every other streaming box; we can still customize the output XML/JSON content for each streaming box in the appropriate child class.
In the future, I will develop an Android app that consumes these Web APIs to play the content on my phone with Chromecast support. We can also extend this library to any other streaming box (Apple TV, Fire TV). I also plan to deploy this Web API app inside Docker on my Raspberry Pi later.
I hope this will help you to get going with your crazy ideas.
The entire source code is uploaded to GitHub.
Happy Coding !!!
This article explains how to process stored requests from a SQL table by multiple engines running simultaneously in an app farm. We need a way for multiple engines to process those requests simultaneously, but once one engine has picked a request, the other engines should not pick it; at the same time, the table should not be locked for reading other records.
In SQL Server, we have a concept called table hints, which are specified in the FROM clause of a DML statement and affect only the table or view referenced in that clause. Various table hints are available, but we are going to look at UPDLOCK and READPAST for this scenario.
Specifies that update locks are to be taken and held until the transaction completes. UPDLOCK takes update locks for read operations only at the row-level or page-level.
Specifies that the Database Engine not read rows that are locked by other transactions. When READPAST is specified, row-level locks are skipped but page-level locks are not skipped.
So, by combining UPDLOCK and READPAST in our DML statement, the rows selected by the first engine are locked and are not returned to another engine, even if requests arrive simultaneously.
In the example query below, we perform the queue operation: each engine selects 10 records at a time, and once picked, those records are updated to PICKED status while the other engines remain able to read the remaining records in parallel.
In SQL Server, we have the OUTPUT clause, which returns information based on each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. So, we will use the OUTPUT clause to return the rows that we are updating to PICKED status, and we use the UPDLOCK and READPAST table hints to select the rows so that they are locked and unavailable to the other engines.
The query below returns 10 unprocessed rows for each engine request and updates those records' status to PICKED. This also takes care of handling simultaneous requests.
UPDATE incoming_request SET status_value = 'PICK' OUTPUT INSERTED.* |
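The truncated query above might be completed roughly as follows (a sketch; the table and status column names come from the fragment, while the key column and the 'UNPROCESSED' status value are assumptions):

```sql
UPDATE incoming_request
SET status_value = 'PICKED'
OUTPUT INSERTED.*
WHERE request_id IN
(
    -- UPDLOCK holds update locks on the selected rows until the
    -- transaction completes; READPAST skips rows that another
    -- engine has already locked instead of blocking on them
    SELECT TOP (10) request_id
    FROM incoming_request WITH (UPDLOCK, READPAST)
    WHERE status_value = 'UNPROCESSED'
    ORDER BY request_id
);
```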
Happy Coding!
In order to handle this scenario, we have to avoid memory exceptions and also avoid reading the data record by record in SQL. The SQL paging concept comes in handy here: it fetches rows in slices with some limit (e.g., 20K rows at a time), and we perform the parallel operations inside a loop.
In SQL Server 2012, Microsoft introduced the OFFSET and FETCH keywords to apply paging to SQL query results. We will loop over every 20K records and perform the parallel operations instead of processing individual records.
SELECT [First Name] + ' ' + [Last Name] FROM Employees
ORDER BY [First Name]
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY
This will skip the first 10 rows and return the next 5 rows.
In a real-world application, I would use it like this:
int startIndex = 0;
int offset = 20000;
while (true)
{
    // Execute the SQL query to load a page of data using startIndex and offset:
    // SELECT * FROM PERSON ORDER BY PERSON_ID OFFSET @STARTINDEX ROWS
    // FETCH NEXT @OFFSET ROWS ONLY
    var queryData = LoadPage(startIndex, offset); // LoadPage is a placeholder for your data access call
    if (queryData.Count == 0) break; // Break the loop since there are no rows to process
    var po = new System.Threading.Tasks.ParallelOptions();
    po.MaxDegreeOfParallelism = MAX_THREAD_LIMIT;
    System.Threading.Tasks.Parallel.ForEach(queryData, po, row =>
    {
        // Process each row here
    });
    startIndex = startIndex + offset; // Advance by the page size (offset + 1 would skip a row)
}
Happy coding!
This app launches the main activity with an AsyncTask in the background to parse the content, get the free ebook title and image URL, and render it on the main activity. It also checks the site periodically (based on settings) and notifies the user.
class MainActivity : AppCompatActivity(),ToolbarManager { |
I have created an AlarmManagerHelper class to set the repeating alarm based on the sync frequency settings. The sync frequency settings are stored in SharedPreferences.
internal class AlarmManagerHelper(ctx: Context) : ContextWrapper(ctx) { |
From Android O, there are many restrictions on background execution, and you can read all the details in the Android developer guide. For this project, I need a background job that runs periodically based on the frequency settings defined in the app and sends a notification to the user with the book title. I used JobIntentService to do the background work and send the notification to the user.
class NotificationService : JobIntentService() { |
I also created a BootReceiver with the BOOT_COMPLETED intent so that the repeating alarm gets set again even if the phone is restarted.
class BootReceiver : BroadcastReceiver() { |
I used the Jsoup library to parse the HTML content. Jsoup is a Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data, using the best of DOM, CSS, and jQuery-like methods.
internal class ParserHelper(ctx: Context) : ContextWrapper(ctx) { |
So overall, this project gives you an idea of how to develop an Android app in Kotlin, including running a background job and sending notifications to users. I have uploaded the entire source code to GitHub.
Happy Coding
Today, I am going to cover the purpose of the web.debug.config and web.release.config files that are present in every web development project in addition to web.config. Usually, we ignore these two files when we deploy the code.
These files are part of the web.config transformation model, which lets you modify your web.config file in an automated fashion when deploying your application to various server environments. You can create additional config files such as Web.Stage.config and Web.UAT.config and modify your configuration settings accordingly.
By default, it allows transformations such as replacing, inserting, or removing elements and attributes.
You can find out more details about how and where to apply in the Microsoft Documentation.
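As a quick illustration, a Web.Release.config that swaps a connection string and removes the debug attribute might look like this (a sketch; the connection string name and server values are placeholders):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the matching connection string during a Release deployment -->
    <add name="DefaultConnection"
         connectionString="Server=prod-sql;Database=MyApp;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Remove debug="true" from the compilation element -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```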
Happy Coding!
I initially thought of using .NET Core on Windows IoT, but later decided to use Raspbian OS and Node.js because I wanted to try Node.js with a real-world application.
Setting up Raspbian OS on the Raspberry Pi was pretty simple. I just followed this excellent article on how to set up Raspbian on a Raspberry Pi. I also configured the remote connection and shared the work folder for deploying files, so I no longer need my Raspberry Pi connected to my TV. I just placed the Raspberry Pi alongside my other streaming boxes and connect to it from my laptop using remote desktop.
I started the Node.js Web API with the following Node packages to parse the Channel 9 feeds.
Express is one of the most popular Node.js web application frameworks for creating Web APIs quickly and easily. Cheerio is used for parsing DOM elements. XMLBuilder is used to construct the XML output easily.
To start with, I created the web server with routing as follows:
var express = require('express') |
I created the parser library class and added the following code to parse the Channel 9 RSS content and fetch the video URLs:
|
That's it. We have just implemented the Web API using Node.js with just a few lines of code. If I run my application and request http://localhost:4567/ch9, it responds with the XML output.
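Pulling the pieces above together, the server wiring might look roughly like this (a sketch; the port and route come from the text, while the ./parserHelper module and its parse callback signature are assumptions):

```javascript
var express = require('express');
var app = express();

// The parser library described above; parse() is assumed to invoke
// its callback with the finished XML string for the Channel 9 feed
var parser = require('./parserHelper');

app.get('/ch9', function (req, res) {
  parser.parse(function (err, xml) {
    if (err) {
      return res.status(500).send(err.message);
    }
    res.set('Content-Type', 'text/xml');
    res.send(xml);
  });
});

app.listen(4567, function () {
  console.log('Parser API listening on port 4567');
});
```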
The deployment is super easy too. Just copy the folder, excluding the node_modules folder; you don't have to deploy it. You can run the npm update command on the Pi server after deploying to restore the node_modules folder and save some deployment time.
To run the app server, just open the terminal in Raspbian and run the command node APP.js, and your personal web server is ready to serve.
I modified the UrlCategoryFeed in categoryFeed.brs in the Roku app before deploying it. To deploy the Roku dev app, just follow the official Roku guide. Make sure you enable development mode on the Roku before deploying it.
Update: Roku recently updated their video channel sample app, and the new app can be found here.