Monday, 24 April 2017

Ways to Improve SQL Query Performance

Avoid Multiple Joins in a Single Query

Try to avoid writing a single SQL query with many joins that also includes outer joins, CROSS APPLY, OUTER APPLY, and other complex subqueries. This reduces the optimizer's choices for deciding join order and join type. Sometimes the optimizer is forced to use nested loop joins, irrespective of the performance consequences, for queries with excessively complex CROSS APPLY operators or subqueries.
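As an illustrative sketch (table and column names here are hypothetical), a query with many joins can sometimes be staged through a temporary table, so the optimizer costs two simpler plans instead of one very complex one:

```sql
-- Stage 1: reduce the driving row set first
SELECT o.OrderID, o.CustomerID
INTO #RecentOrders
FROM dbo.Orders AS o
WHERE o.OrderDate >= DATEADD(MONTH, -1, GETDATE());

-- Stage 2: join the much smaller intermediate result
SELECT r.OrderID, c.CustomerName, d.ProductID
FROM #RecentOrders AS r
INNER JOIN dbo.Customers AS c ON c.CustomerID = r.CustomerID
INNER JOIN dbo.OrderDetails AS d ON d.OrderID = r.OrderID;

DROP TABLE #RecentOrders;
```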

Eliminate Cursors from the Query

Try to remove cursors from your queries and use set-based queries instead; a set-based query is more efficient than a cursor-based one. If there is a genuine need for a cursor, avoid dynamic cursors, as they tend to limit the choice of plans available to the query optimizer. For example, a dynamic cursor limits the optimizer to nested loop joins.
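A minimal sketch of the difference, assuming a hypothetical dbo.Products table: the cursor works row by row, while the set-based statement does the same work in one pass:

```sql
-- Cursor-based approach (row by row, avoid where possible):
-- DECLARE cur CURSOR FOR SELECT ProductID FROM dbo.Products; ... one UPDATE per fetch ...

-- Set-based equivalent: one statement updates every qualifying row at once
UPDATE dbo.Products
SET Price = Price * 1.10          -- e.g. a 10% price increase
WHERE Discontinued = 0;
```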

Avoid Use of Non-correlated Scalar Sub Query

You can rewrite your query so that a non-correlated scalar subquery runs as a separate statement, storing its output in a variable that can be referenced in the main query or later in the batch. This gives the optimizer better options, which can produce more accurate cardinality estimates along with a better plan.
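A sketch of the rewrite, using a hypothetical dbo.Orders table: the scalar subquery is evaluated once into a variable, which the main query then compares against:

```sql
-- Instead of:
-- SELECT * FROM dbo.Orders WHERE OrderDate = (SELECT MAX(OrderDate) FROM dbo.Orders)

DECLARE @MaxOrderDate DATETIME;

SELECT @MaxOrderDate = MAX(OrderDate) FROM dbo.Orders;

-- The main query now compares against a known scalar value
SELECT * FROM dbo.Orders WHERE OrderDate = @MaxOrderDate;
```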

Avoid Multi-statement Table Valued Functions (TVFs)

Multi-statement TVFs are more costly than inline TVFs. SQL Server expands inline TVFs into the main query the same way it expands views, but it evaluates multi-statement TVFs in a separate context from the main query and materializes their results into temporary work tables. The separate context and the work table are what make multi-statement TVFs costly.
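To illustrate the difference in shape (the table and columns here are hypothetical), the two forms look like this:

```sql
-- Inline TVF: a single SELECT that SQL Server expands into the calling query
CREATE FUNCTION dbo.GetProductsInline (@Prefix VARCHAR(50))
RETURNS TABLE
AS
RETURN
(
    SELECT PID, ProdName, ProdDesc
    FROM dbo.tblProduct
    WHERE ProdName LIKE @Prefix + '%'
);
GO

-- Multi-statement TVF: results are materialized into a work table first
CREATE FUNCTION dbo.GetProductsMulti (@Prefix VARCHAR(50))
RETURNS @Result TABLE (PID INT, ProdName VARCHAR(50), ProdDesc VARCHAR(50))
AS
BEGIN
    INSERT INTO @Result (PID, ProdName, ProdDesc)
    SELECT PID, ProdName, ProdDesc
    FROM dbo.tblProduct
    WHERE ProdName LIKE @Prefix + '%';
    RETURN;
END;
```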

Creation and Use of Indexes

We are aware that an index can magically reduce data retrieval time but has the reverse effect on DML operations, which may degrade query performance. Because of this trade-off, indexing is a challenging task, but done well it improves SQL query performance and gives you the best query response time.

Understand the Data

Understand the data, its types, and how queries retrieve it before making any decision to create an index. If you understand the behaviour of the data thoroughly, it will help you decide which columns should have a clustered or a non-clustered index. If a clustered index is not on a unique column, SQL Server maintains uniqueness by adding a hidden uniquifier to every duplicate key, which adds overhead. To avoid this overhead, choose the column correctly or make the appropriate changes.
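For example, placing the clustered index on a column you know to be unique avoids the hidden uniquifier entirely (index name and the uniqueness assumption are hypothetical here):

```sql
-- PID is assumed unique, so SQL Server never needs to append a uniquifier
CREATE UNIQUE CLUSTERED INDEX IX_Product_PID ON dbo.tblProduct (PID);
```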

Create a Highly Selective Index

Selectivity defines the percentage of qualifying rows in the table (qualifying rows / total rows). If this ratio is low, the index is highly selective and most useful. A non-clustered index is most useful when the ratio is around 5% or less, i.e. when the index can eliminate 95% of the rows from consideration. If an index returns more than 5% of the rows in a table, it probably will not be used; either a different index will be chosen or created, or the table will be scanned.
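Selectivity can be checked with a simple aggregate before creating the index (table and column names are hypothetical):

```sql
-- Percentage of the table each value would return; low percentages = highly selective
SELECT Status,
       COUNT(*) AS QualifyingRows,
       100.0 * COUNT(*) / (SELECT COUNT(*) FROM dbo.Orders) AS PercentOfTable
FROM dbo.Orders
GROUP BY Status
ORDER BY PercentOfTable DESC;
```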

Position a Column in an Index

The order, or position, of a column in an index also plays a vital role in SQL query performance. An index can help only if the query's criteria match the leftmost columns of the index key. As a best practice, place the most selective columns leftmost in the key of a non-clustered index.
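A sketch with a hypothetical composite index, where the more selective column leads:

```sql
-- CustomerID is assumed far more selective than OrderStatus, so it goes leftmost
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Status
    ON dbo.Orders (CustomerID, OrderStatus);

-- This query can seek on the index because it filters on the leftmost key column
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;
```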

Drop Unused Indexes

Dropping unused indexes speeds up data modifications without affecting data retrieval. Also, define a strategy for batch processes that run infrequently and use certain indexes: creating those indexes just before the batch runs and dropping them when it finishes reduces the ongoing overhead on the database.
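Unused indexes can be found with the index-usage DMV; this query (a sketch, run in the target database) lists indexes that are written to but never read:

```sql
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name AS IndexName,
       s.user_updates,
       s.user_seeks + s.user_scans + s.user_lookups AS TotalReads
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
  AND i.name IS NOT NULL;   -- skip heaps
```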

Don’t Access More Data Than You Need

Don’t return more columns or rows of data to the client than absolutely necessary; this just increases disk I/O on the server.

Try to Use the WITH (NOLOCK) Option

Consider the WITH (NOLOCK) table hint for queries that can tolerate dirty reads; it avoids waiting on locks held by other transactions.

Always Use the Option “SET NOCOUNT ON”

Use SET NOCOUNT ON in stored procedures and batches so SQL Server does not send a “rows affected” message for every statement.
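The last two tips can be sketched as follows (procedure and table names are hypothetical; NOLOCK permits dirty reads, so use it only where that is acceptable):

```sql
-- NOLOCK: read without taking shared locks (may return uncommitted data)
SELECT ProductID, ProductName
FROM dbo.Products WITH (NOLOCK);

-- NOCOUNT: suppress the "n rows affected" messages sent to the client
CREATE PROCEDURE dbo.GetProducts
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ProductID, ProductName FROM dbo.Products;
END;
```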

Thursday, 20 April 2017

Import MS Excel data to SQL Server table using C#

If you already have data in an MS Excel file and want to migrate it to a SQL Server table, follow the steps below.
My .xls file (ProductSheet.xls) has the columns PID, ProdName, ProdDesc and ProdCost.



Create a SQL table


CREATE TABLE [dbo].[tblProduct](
 [PID] [int] NULL,
 [ProdName] [varchar](50) NULL,
 [ProdDesc] [varchar](50) NULL,
 [ProdCost] [varchar](50) NULL,
 [CREATED_DATE] [datetime] NULL
)

Your MS Excel sheet and SQL table are ready; now it's time to write the C# code to import the Excel sheet into the tblProduct table.

Create an MVC project.
In the HomeController class, add these namespaces:


using System.Configuration;
using System.Data.OleDb;
using System.Data.SqlClient;

In the HomeController class, add these variables at class level:


        SqlConnection Sqlcon;
        string strfileLocation = "";
        string sqlconn;

In the HomeController, create the function below:


 public void UploadBulkData(string FilePath)
        {
            string OleDBConnectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + FilePath + ";Extended Properties=Excel 12.0;Persist Security Info=False";
            string Query = string.Empty;
            sqlconn = ConfigurationManager.ConnectionStrings["TransactionDB"].ConnectionString;
            Sqlcon = new SqlConnection(sqlconn);
            #region Query
           // Query = @"SELECT * FROM [Sheet1$] WHERE PID IS NOT NULL";
            Query = @"SELECT
                             UCASE(RTRIM(LTRIM(PID))) AS PID,
                             UCASE(RTRIM(LTRIM(ProdName))) AS ProdName,
                             UCASE(RTRIM(LTRIM(ProdDesc))) AS ProdDesc,
                             UCASE(RTRIM(LTRIM(ProdCost))) AS ProdCost
                             FROM [Sheet1$]";
            #endregion

            using (OleDbConnection oledbConn = new OleDbConnection(OleDBConnectionString))
            {
                oledbConn.Open();

                // The adapter executes the query and fills the DataSet
                DataSet ds = new DataSet();
                OleDbDataAdapter oda = new OleDbDataAdapter(Query, oledbConn);
                oda.Fill(ds);
                oledbConn.Close();
                DataTable Exceldt = ds.Tables[0];

                Exceldt.Columns.Add("CREATED_DATE", typeof(DateTime));
                foreach (DataRow row in Exceldt.Rows)
                {
                    row["CREATED_DATE"] = DateTime.Now;
                }

                //creating object of SqlBulkCopy    
                SqlBulkCopy objbulk = new SqlBulkCopy(Sqlcon);

                //assigning Destination table name    
                objbulk.DestinationTableName = "tblProduct";

                // inserting Datatable Records to DataBase
                Sqlcon.Open();
                objbulk.WriteToServer(Exceldt);
                Sqlcon.Close();
            }
        }


Now, it's time to call the above function.

        [HttpPost]
        public ActionResult Index(ProductModel model)
        {
            strfileLocation = @"D:\Projects\WebApplication1\WebApplication1\ProductSheet.xls"; //Server.MapPath("~/ProductSheet.xls");
            UploadBulkData(strfileLocation);
            return View();
        }

When you post to the Index action, it will import all the data from the .xls file into your SQL table.

CTE and Temp Table and Table Variable - SQL SERVER

Temp tables

Temp tables behave just like normal tables, but are created in the tempdb database. They persist until they are dropped, or until the connection that created them closes. They are visible in the procedure that created them and in any procedures that procedure calls.

Just like normal tables, they can have primary keys, constraints and indexes, and column statistics are kept for the table. 

Temp tables, while they have space assigned to them in the tempDB database, will generally be accessed only from memory, unless the server is under memory pressure, or the amount of data in the table is large.



CREATE TABLE #LocalTemp
(
 UserID int,
 Name varchar(50), 
 Address varchar(150)
)
GO
insert into #LocalTemp values ( 1, 'Shailendra','Noida');
GO
Select * from #LocalTemp

Table Variables

These tables behave very much like other variables in their scoping rules. They are created when they are declared and are dropped when they go out of scope. They cannot be explicitly dropped.

Like temp tables, table variables reside in tempdb. They have entries in the system tables in tempdb, just like temp tables, and they follow the same behaviour regarding whether they are in memory or on disk.

Table variables can have a primary key, but indexes cannot be created on them, neither are statistics maintained on the columns. This makes table variables less optimal for large numbers of rows, as the optimiser has no way of knowing the number of rows in the table variable.


 GO
 DECLARE @TProduct TABLE
 (
 SNo INT IDENTITY(1,1),
 ProductID INT,
 Qty INT
 ) 
 --Insert data into the table variable @TProduct 
 INSERT INTO @TProduct(ProductID,Qty)
 SELECT DISTINCT ProductID, Qty FROM ProductsSales ORDER BY ProductID ASC 
 --Select data
 Select * from @TProduct
 
 --Next batch
 GO
 Select * from @TProduct --gives error in next batch

CTE

CTE stands for Common Table Expression. It was introduced in SQL Server 2005. A CTE is a temporary result set, typically the result of a complex subquery. Unlike a temporary table, its life is limited to the current query. It is defined using the WITH statement. CTEs improve the readability and maintainability of complex queries and subqueries. It is common practice to begin a CTE with a semicolon, which terminates any preceding statement.



;With CTE1(Address, Name, Age)--Column names for CTE, which are optional
AS
(
SELECT Addr.Address, Emp.Name, Emp.Age from Address Addr
INNER JOIN EMP Emp ON Emp.EID = Addr.EID
)
SELECT * FROM CTE1 --Using CTE 
WHERE CTE1.Age > 50
ORDER BY CTE1.NAME

Monday, 17 April 2017

AngularJS : Service vs factory vs provider

Introduction
AngularJS Service, Factory and Provider are all used for the same purpose: creating utility functions that can be used throughout the app as injectable objects. However, the way they are created and the way they are used differ. Here we'll try to understand them clearly.

AngularJS Service
Services are singleton objects or functions that carry out specific tasks. They hold business-logic functions that can be called from anywhere: controllers, directives, filters, etc.
In a service, we create the functions as properties of the "this" object.

AngularJS Factory
The purpose of a factory is the same as a service; however, in this case we create a new object, add functions as properties of that object, and return the object at the end. In other words, with a factory you create an object inside the factory and return it.

AngularJS Provider
The purpose is again the same; however, a provider exposes whatever its $get function returns. A provider is also the only one of the three that can be configured in the module's config phase, before the application runs.

Now let's try to understand Service, Factory and Provider by creating and using them.
Creating Service, Factory and Provider

In below code snippet, we shall see how to create a Service, Factory and a Provider. First, we have created a module. 

Next is the Service creation where we have created a service using .service method. Notice that in the service both functions "Hello" and "Sum" have been created on "this" object.

Then we have created a factory using .factory method. Here, we have created both functions "Hello" and "Sum" on the new object "factoryObject" created and then we are returning that object at the end of the factory method.

Lastly, we have created a provider using the .provider method, in which the $get function returns an object with two functions, "Hello" and "Sum".


var app = angular.module("app", []);

    // create utility function with service
    app.service("myService", function () {
        // here we expose the function on this object
        this.Hello = function () {
            return "Hello";
        };
        this.Sum = function (a, b) {
            return a + b;
        };
    });

    // create utility function with factory
    app.factory("myFactory", function () {
        // here we return the object
        var factoryObject = {};
        factoryObject.Hello = function () {
            return "Hello";
        }
        factoryObject.Sum = function (a, b) {
            return a + b;
        }
        return factoryObject;
    });

    // create utlity function with provider
    app.provider("myProvider", function () {
        this.$get = function () {
            return {
                Hello: function () {
                    return "Hello";
                },
                Sum: function (a, b) {
                    return a + b;
                }
            };
        };
    });

Notice that although all three expose the same functions "Hello" and "Sum" with the same functionality, the way they are declared differs. This is the major difference between Service, Factory and Provider.
Using Service, Factory and Provider

To use them, simply inject them into the controller definition and use the injected references to call the "Hello" and "Sum" functions defined on them.

Below code snippet is pretty simple. We are simply calling "Hello" and "Sum" functions defined on respective Service, Factory and Provider reference objects.

app.controller("myController", function ($scope, myService, myFactory, myProvider) {
        // service function call
        $scope.ServiceOutput = "Look for service output here.";
        $scope.HelloService = function () {
            $scope.ServiceOutput = myService.Hello();
        };
        $scope.SumService = function () {
            $scope.ServiceOutput = "The sum is : " + myService.Sum(2, 5);
        };

        // factory function call
        $scope.FactoryOutput = "Look for factory output here.";
        $scope.HelloFactory = function () {
            $scope.FactoryOutput = myFactory.Hello();
        };
        $scope.SumFactory = function () {
            $scope.FactoryOutput = "The sum is : " + myFactory.Sum(22, 5);
        };

        // provider function call
        $scope.ProviderOutput = "Look for provider output here.";
        $scope.HelloProvider = function () {
            $scope.ProviderOutput = myProvider.Hello();
        };
        $scope.SumProvider = function () {
            $scope.ProviderOutput = "The sum is : " + myProvider.Sum(22, 52);
        };
    });

Calling functions defined in Service, Factory and Providers

The HTML code looks like this. There are three separate sections, one each for Service, Factory and Provider. On clicking the respective buttons, the functions defined on the controller's $scope are called and the output appears on the page.

<div ng-app="app" ng-controller="myController">
    <h3>Using Service</h3>
    <button ng-click="HelloService()">Hello Service</button>
    <button ng-click="SumService()">Sum Service</button>
    <div ng-bind="ServiceOutput"></div>


    <h3>Using Factory</h3>
    <button ng-click="HelloFactory()">Hello Factory</button>
    <button ng-click="SumFactory()">Sum Factory</button>
    <div ng-bind="FactoryOutput"></div>

    <h3>Using Provider</h3>
    <button ng-click="HelloProvider()">Hello Provider</button>
    <button ng-click="SumProvider()">Sum Provider</button>
    <div ng-bind="ProviderOutput"></div>
</div>

Conclusion
Broadly, there is no difference in output or functionality between Service, Factory and Provider; the difference lies in how they are created. All three have different ways of creation and function implementation.



Friday, 14 April 2017

AngularJS AJAX - $http

AngularJS AJAX - $http
$http is an AngularJS service for reading data from remote servers.

AngularJS $http
The AngularJS $http service makes a request to the server, and returns a response.

 <div ng-app="myApp" ng-controller="myCtrl">   
 <p>Today's welcome message is:</p>  
 <h1>{{myWelcome}}</h1>  
 </div>  
 <script>  
 var app = angular.module('myApp', []);  
 app.controller('myCtrl', function($scope, $http) {  
   $http.get("welcome.htm")  
   .then(function(response) {  
     $scope.myWelcome = response.data;  
   });  
 });  
 </script>  

Methods
The example above uses the .get method of the $http service.

The .get method is a shortcut method of the $http service. There are several shortcut methods:

.delete()
.get()
.head()
.jsonp()
.patch()
.post()
.put()

The methods above are all shortcuts of calling the $http service:

 var app = angular.module('myApp', []);  
 app.controller('myCtrl', function($scope, $http) {  
   $http({  
     method : "GET",  
     url : "welcome.htm"  
   }).then(function mySuccess(response) {  
     $scope.myWelcome = response.data;  
   }, function myError(response) {  
     $scope.myWelcome = response.statusText;  
   });  
 });  

The example above executes the $http service with an object as an argument. The object specifies the HTTP method, the URL, what to do on success, and what to do on failure.



AngularJS Authentication

AngularJS Authentication with JWT(JSON web tokens)

JWTs provide a way for clients to authenticate every request without having to maintain a session or repeatedly pass login credentials to the server. A JWT consists of three main components: a header object, a claims object, and a signature. These three parts are base64-encoded and concatenated with periods as separators. Some important things to know about JWTs:

The claims object contains an expiration date, which dictates how long the token is valid
The claims object can also contain custom bits of information, such as a user ID
The token is NOT encrypted, so anyone who has it can read all the properties
The token IS signed by the server, so if any of the values are changed, the server will reject it



Thursday, 13 April 2017

EmailId Validation IN Sql Server

Create a User defined function to Validate EmailID in Sql Server

CREATE FUNCTION UDFValidateEmail (@Email VARCHAR(255))
RETURNS BIT
AS
BEGIN
RETURN
(
SELECT
CASE
WHEN @Email IS NULL THEN 0                             --NULL email is invalid
WHEN CHARINDEX(' ', @Email) <> 0 OR                    --Check for invalid characters
     CHARINDEX('/', @Email) <> 0 OR
     CHARINDEX(':', @Email) <> 0 OR
     CHARINDEX(';', @Email) <> 0 THEN 0
WHEN LEN(@Email) - 1 <= CHARINDEX('.', @Email) THEN 0  --Check for '.' at the end of the string
WHEN @Email LIKE '%@%@%' OR
     @Email NOT LIKE '%@%.%' THEN 0                    --Check for duplicate @ or invalid format
ELSE 1
END
)
END

Now call the UDF to validate an email:

SELECT dbo.UDFValidateEmail('suraj@dotnetguru.in')

A valid email returns 1, while an invalid email returns 0.

Sunday, 2 April 2017

Windows Azure and Cloud Computing - A Glance

Windows Azure, renamed Microsoft Azure in 2014, is a cloud computing platform designed by Microsoft to build, deploy, and manage applications and services through a global network of datacenters.

The popular trend in today's technology-driven world is ‘Cloud Computing’. Cloud computing refers to storing and accessing data over the internet rather than on your computer's hard drive.

One prominent example of cloud computing is Office 365, which allows users to store, access, and edit their MS Office documents online (in the browser) without installing the actual program on their device.



Types of Cloud
Cloud storage comes in three forms:
  • Public
  • Private
  • Hybrid

Public Cloud − A service provider makes clouds available to the general public; these are termed public clouds. They are accessed by users over the internet. They are open to the public, and their infrastructure is owned and operated by service providers, as in the case of Google and Microsoft.

Private Cloud − These clouds are dedicated to a particular organization, which can use them for storing the company's data, hosting business applications, and so on. Data stored on a private cloud can't be shared with other organizations. The cloud is managed either by the organization itself or by a third party.

Hybrid Cloud − When two or more clouds are bound together to offer the advantages of both public and private clouds, they are termed a hybrid cloud. Organizations can use private clouds for sensitive applications and public clouds for non-sensitive applications. Hybrid clouds provide flexible, scalable and cost-effective solutions to organizations.

Benefits of Cloud

There are many benefits of the cloud. Some of them are listed below.
·     Cloud services offer scalability; resources are allocated and de-allocated dynamically as per demand.
·     They save cost by reducing capital infrastructure.
·     They allow users to access applications independent of their location and hardware configuration.
·     They simplify the network and let clients access applications without buying a licence for each individual machine.
·     Storing data in the cloud is more reliable, as it is not easily lost.

SPI


Next comes how cloud services are categorized. In SPI, S stands for Software, P for Platform and I for Infrastructure: SaaS is Software as a Service, PaaS is Platform as a Service and IaaS is Infrastructure as a Service.
Following are live examples of these models.
·        SaaS Model − E-mail (Gmail, Yahoo, etc.)
·        PaaS Model − Microsoft Azure
·        IaaS Model − Amazon S3


