If you go to the Tools menu and then Options, you should see Source Control in the tree view on the left.


Under Plug-in Selection you should be able to choose TFS. If not, the TFS plug-in might not have been installed when you set up SSDT. Your software administrator should be able to give you the plug-in needed to activate it.


If you are not able to choose TFS under Plug-in Selection, you likely need to download and install Team Explorer for Visual Studio 2012 - this is the plug-in that Thomas LeBlanc was talking about. Here are the download pages:

Team Explorer for Visual Studio 2010 : http://www.microsoft.com/en-us/download/confirmation.aspx?id=329

Team Explorer for Visual Studio 2012 : http://www.microsoft.com/en-us/download/details.aspx?id=30656

Team Explorer for Visual Studio 2015 : https://www.microsoft.com/en-us/download/details.aspx?id=47727

 

Posted by Steve Lim

The SQLOS and CLR integration were introduced in SQL Server 2005. The database engine has been stable for many releases with few fundamental changes. SQL Server 2016 includes the integration of an R language processor as an external process callable from the SQL Server database engine, in much the same way that the CLR integration works. Microsoft indicated during the PASS Summit 2015 that the integration of Revolution R was done to facilitate the addition of other language processors in the future.
Figure 1 shows a detailed block diagram of the SQL Server database engine architecture. We do not discuss every component in the diagram but rather focus on the areas that will help you understand the relationship between SQL Server and the storage components that it accesses during operations.

[Figure 1. SQL Server database engine architecture (diagram not reproduced)]

The four major components of the SQL Server architecture are: protocol layer, SQLOS, query processor (relational engine), and storage engine.

Protocol layer

The protocol layer manages the communication between clients and the database engine. .NET libraries on both the client and server computers use a Microsoft-defined messaging format called Tabular Data Stream (TDS) to exchange requests and results. The protocol layer encapsulates TDS in a standard communication protocol such as TCP/IP or Named Pipes. When the TDS messages that originate from the client are unpacked from the communication protocol, they are handed off to a command processor that is part of the relational engine. Result sets sent back to the client as TDS are unpacked and processed by the client application.
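
If you want to see which transport each client connection is actually using, the sys.dm_exec_connections DMV exposes it per session. A minimal sketch (the column list is a common subset, not exhaustive):

SELECT session_id,
       net_transport,      -- TCP, Named pipe, or Shared memory
       protocol_type,      -- TDS for SQL Server clients
       auth_scheme,        -- NTLM, KERBEROS, or SQL
       client_net_address
FROM sys.dm_exec_connections;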

SQLOS

SQLOS is an abstraction layer that lies between the SQL Server database engine, Windows, and any external components such as the CLR and the R language processor. It is responsible for functions, including thread scheduling and memory management, typically performed by Windows for other applications. SQLOS does not have any abstraction for I/O requests; threads used by SQL Server issue mostly asynchronous read and write requests directly to Windows and then wait for the results.
The SQLOS queries Windows at startup to determine the number and type of CPUs (NUMA, non-NUMA). It uses this information to create threads and schedulers to handle multiple simultaneous requests, while it hides the details from Windows to prevent unnecessary context switching for long-running processes.
The database engine constantly communicates with Windows through the SQLOS to request memory and respond to memory allocation adjustment requests. Windows is responsible for negotiating memory allocations for all processes running on the server and makes adjustments based on requests from applications as well as its own needs and the total amount of memory available to Windows. Windows grants memory requests from SQL Server as long as there are sufficient available memory resources on the server. If Windows receives more requests for memory than it can grant, the OS may try to negotiate with SQL Server to reduce the amount of memory allocated.
The SQLOS manages its memory resources dynamically in negotiation with Windows using an internal object called the buffer pool. All memory in the buffer pool that is not used by an internal process, such as the procedure cache or client connections, is allocated to a data cache used by the storage engine for buffering data and index pages. The data cache is typically the largest consumer of memory from the buffer pool. The SQLOS uses a memory broker to efficiently adjust memory allocations from the buffer pool to the many internal services that need dynamic memory allocations.
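
Two DMVs make this machinery visible. A quick sketch, assuming SQL Server 2012 or later (earlier versions split pages_kb into separate single-page and multi-page columns):

-- Schedulers that SQLOS created, roughly one per logical CPU it uses
SELECT scheduler_id, cpu_id, status,
       current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';

-- Largest memory clerks; the buffer pool's data cache is usually at the top
SELECT TOP (10) type, name, SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type, name
ORDER BY memory_mb DESC;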

Query processor

The Query Processor, shown in Figure 1, is also referred to as the relational engine.
The main responsibilities of the relational engine are:

  • Validating T-SQL statements.
  • Parsing SQL statements by breaking them down into keywords, parameters, operators, and identifiers, and creating a series of smaller logical operations.
  • Optimizing the execution plan, which consists of finding an acceptable plan from the list of candidate plans that it determines can perform the required tasks. The relational engine estimates the cost of the processing steps based on internal metrics, including estimated memory usage, CPU utilization, and the number of required I/Os based on statistics, for a set of competing plans until further optimization is determined to be more expensive than execution. The optimizer does not guarantee that the selected plan is the best, only that it is good enough that further optimization is not warranted. Plans that are reused from cache and plans that are considered trivial do not require optimization (see the plan-cache query after this list).
  • Processing Data Definition Language (DDL) and other statements, such as SET statements, to set connection options and the CREATE statements to create objects in a database.
  • Formatting results returned to the client. The results are formatted as either a traditional, tabular result set or as an XML document. The results are then encapsulated in one or more TDS packets and returned to the application.
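
To see plan reuse in action, the plan cache can be queried directly. A minimal sketch using the plan-cache DMVs:

-- Most reused cached plans; a high usecounts means optimization was skipped
SELECT TOP (10) cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;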

Storage engine

The SQL Server storage engine interacts with the relational engine to provide services to end users. From the perspective of the user and the DBA, the functioning of the storage and relational engines is indistinguishable. However, for IT professionals who design and manage applications, a basic understanding of these internals can be instrumental in understanding SQL Server behavior and troubleshooting problems.

The main functions of the storage engine are:

  • Managing the data cache buffers and I/O to the physical files
  • Controlling concurrency, managing transactions, locking, and logging
  • Managing the files and physical pages used to store data
  • Recovering from system faults

The relational engine decides which data satisfies a request, and the storage engine makes the data available. The storage engine is also responsible for maintaining data integrity to prevent simultaneous requests from interfering with each other.

This high-level timeline shows how the relational engine and the storage engine work together to satisfy a request (a query for observing the resulting file I/O follows the list):

  1. Data access activity begins with a query, whether it originates from a user interface or from an automated task. The data request is passed from the protocol stack into the relational engine.
  2. The relational engine compiles and optimizes the request into an execution plan. The plan consists of a series of steps required to retrieve and process the data into a result that the client can consume.
  3. The relational engine runs the execution plan. The execution steps that involve accessing tables and indexes are sent to the storage engine.
  4. The storage engine returns data to the relational engine, where it is combined and processed into the final result set and returned to the protocol stack.
  5. The result set is sent back to the user via the protocol stack.
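
The storage engine's side of this exchange shows up as file I/O, which can be observed with sys.dm_io_virtual_file_stats. A minimal sketch:

-- Cumulative reads/writes and stall times per database file since startup
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;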

SQL Server logical components

The SQL Server database engine was originally designed to effectively support normalized database design. A number of enhancements, especially in the last few releases, have greatly improved performance for data warehouse workloads, which are typically de-normalized.

Normalization is the process of removing redundancies from the data. Transact-SQL queries then recombine the table data using relational join operations. By avoiding the need to update the same data in multiple places, normalization improves the efficiency of an application and reduces the opportunities to introduce errors due to inconsistent data. The SQL Server logical architecture defines how the data is logically grouped and presented to the users. The core components in this architecture are:

  • Tables: Tables consist of one or more data pages. The table columns define the number and type of data that may be stored in the table. Each instance of the columns is stored as a row. The rows of a table may be ordered on the data pages and stored on disk according to the value of a clustered index key. Tables that do not have a clustered index key are stored on pages and disk in an unordered structure, also known as a heap. The storage of tables with a clustered index uses a binary tree or b-tree structure.
  • Indexes: A non-clustered index defines a key value made up of one or more columns of a table and is stored as a b-tree. Additional data from the table that is not part of the index key can also be included in the b-tree. An index can speed up access to data when the data can be searched for by the value of the index key. Additional performance gains can be attained if all the data required by the query is contained in the leaf of an index, as either part of the key or non-key included columns. This prevents the relational engine from performing an additional lookup in the parent table (see the index sketch after this list).
    Indexes require additional storage and maintenance that can be non-trivial and can adversely impact Insert, Update, and Delete performance. Indexes that are rarely or never used by the optimizer have costs with no corresponding benefit.
  • Views: A view is a virtual table or a stored query. Views primarily assist application developers by reducing the number and complexity of queries to retrieve commonly accessed data that requires multiple tables to be joined, sorted and or filtered. A view may also be indexed and potentially used by the optimizer to speed up data access.
  • Stored procedures: A stored procedure is a group of Transact-SQL statements compiled and stored in a single execution plan. Coding business logic into stored procedures creates a single point of control to ensure that business rules are correctly enforced. The use of stored procedures with proper parameterization is considered a best practice because of the execution plan efficiency and the prevention of SQL injection.
    Stored procedures can also improve performance through the reuse of cached execution plans. SQL Server has an efficient algorithm to find any existing execution plans for any specific SQL statement. Cached execution plans can become stale or not be optimized for all values of user-supplied input parameters. There have been enhancements to how parameters are treated for cached plans, such as the optimize for unknown optimizer hint. SQL Server provides numerous performance monitor counters and dynamic management views that you can access to determine whether your stored procedures affect performance positively or negatively.
  • Constraints: Constraints are commonly used to enforce data integrity in a database including referential, data, and unique keys or indexes.
  • User-defined functions: Like functions used in other programming languages, SQL Server supports user-defined functions as named routines that accept parameters, perform an action, such as a complex calculation, and return the result of that action as a value. The return value can either be a single scalar value or a result set.
  • Triggers: A trigger is a stored procedure that automatically executes when an event occurs in the database server. DML triggers execute when a user tries to modify data through a data manipulation language (DML) event. DDL triggers execute in response to a variety of data definition language (DDL) events. These events primarily correspond to Transact-SQL CREATE, ALTER, and DROP statements, and certain system stored procedures that perform DDL-like operations.
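
As an illustration of the covering-index point above, here is a sketch using a hypothetical Orders table (the table and column names are assumptions):

-- Queries filtering on CustomerID that return only OrderDate and TotalDue
-- can be answered entirely from the index leaf, avoiding a table lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);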

Not all database designs follow strict normalization rules. A database that is used primarily for decision support (as opposed to update-intensive transaction processing) may not process many redundant updates. The database structure may be more understandable and efficient for decision support queries if the design is not fully normalized. De-normalization can also help the query optimizer be more efficient for typical data warehouse queries.

The impact of database design on performance cannot be overstated. For OLTP workloads, databases that are under-normalized are a more common design problem than data structures that are over-normalized. Starting with a normalized design and then selectively de-normalizing tables for specific reasons may be the best strategy.

SQL Server physical components

The SQL Server physical components determine how the data is stored in the file system of the operating system. The selection of the number and types of table columns and index design has a major impact on the requirements for physical storage.

Database file types

SQL Server uses three types of files:

  • Primary data files: Every database has one primary data file that stores data as well as information about the other files used by the database. By convention, a primary data file has an .MDF extension.
  • Secondary data files: A database can have zero or more secondary data files; they are not required. By convention, a secondary data file has an .NDF extension.
  • Log files: Each database has at least one log file, independent of the number of data files. Log files store the write-ahead transaction log information that is needed to recover transactions for the database. By convention, a transaction log file has an .LDF extension. (A query to list a database's files follows this list.)
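
A quick way to see these file types for the current database (a sketch; size is reported in 8-KB pages):

SELECT name, type_desc, physical_name,
       size * 8 / 1024 AS size_mb   -- size is in 8-KB pages
FROM sys.database_files;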

Data files

Data files store the 8K pages used by SQL Server. A SQL Server page is the basic unit of logical data storage. A page begins with a 96-byte header that contains system information about the page. The disk space allocated to the primary or secondary data files (MDF or NDF) is logically divided into pages.

The most common types of pages that can be allocated to a data file are:

  • Data pages
  • LOB pages
  • Index pages
  • Page free space (PFS) pages
  • Global allocation map and shared global allocation map (GAM and SGAM) pages
  • Index allocation map (IAM) pages
  • Bulk change map (BCM) pages
  • Differential change map (DCM) pages

Figure 2 shows the relationship of these major page types in a data file:

[Figure 2. Major page types in a data file (diagram not reproduced)]

Extents

Extents are the basic units for allocation of space. Each extent has eight physically adjacent pages (64 KB). A new table or index is usually allocated pages from mixed extents. Uniform extents are used for subsequent allocations after the object grows beyond eight pages.

Some SQL Server features use extents. For example, database snapshots allocate new space in their associated NTFS sparse files using extents, even if only one page in the extent has changed. This allows SQL Server to put pages for subsequent changes from that extent in an adjacent location on disk to improve performance when reading from the sparse file. Also, when using differential backups, SQL Server uses the differential change map pages in the data file to identify any extents that have been modified since the last full backup. The backup engine then copies those changed extents to the differential backup file. On restore, the full backup is used to create a new copy of the database, and then the changed extents stored in the differential backup are used to overwrite the extents from the full backup that have changed.
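
A minimal sketch of that full-plus-differential cycle, with hypothetical database and path names:

-- Full backup, then a differential containing only extents flagged in the DCM pages
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_full.bak' WITH INIT;
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_diff.bak' WITH DIFFERENTIAL, INIT;

-- Restore: lay down the full backup first, then overlay the changed extents
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backup\SalesDB_full.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backup\SalesDB_diff.bak' WITH RECOVERY;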

File groups

Every database has a PRIMARY file group. Most databases have only the PRIMARY file group and one data file. User-defined file groups can be created to group data files together for administrative, data allocation, and placement purposes. A file group can contain one or more data files.

At any time, one file group is designated as the default file group. When objects are created in the database without being assigned to a file group, they are assigned to the default file group. The files in the default file group must be large enough to hold any new objects not allocated to other file groups. The PRIMARY file group is the default file group unless it is changed by using the ALTER DATABASE statement.

Historically, when SQL Server was deployed on servers that had a limited number of direct-attached disks, DBAs used multiple file groups and files to spread I/O across physical disks. It was common to create large tables and their corresponding indexes on different file groups so that the files could be allocated to different physical disks.

With the widespread adoption of RAID controllers and intelligent storage arrays, the added complexity of such detailed object placement does not lead to better performance. There are, however, several reasons to use multiple files and file groups for a database, including:

  • Use of In-Memory OLTP tables requires the creation of a memory-optimized file group.
  • Filestream requires a filestream file group.
  • Creating multiple equally sized files in one file group helps when there is a high page allocation rate. SQL Server spreads new allocations for objects created this way using a proportional fill algorithm. This technique is widely used for TEMPDB data files on instances that have high object creation and deletion rates, and it can alleviate wait times associated with page latch waits on the data file allocation pages. It can also be necessary for user databases that have similar characteristics. (A sketch of this layout follows the list.)
  • Separate table and index partitions into read/write and read-only file groups for data warehouse applications. Read-only file groups only need to be backed up once; subsequent backups can ignore the read-only file groups because that data does not change.
  • SQL Server Enterprise Edition includes a feature that allows piecemeal recovery of the database using multiple file groups. On restore, SQL Server Enterprise Edition can make the database available after the primary file group has been brought online. The other file groups can be brought online in any order.
  • Use file groups for partitioned table management. Partitions can be assigned to different file groups using different classes of storage, and partitions can be switched in and out of the table for better data loading and archiving.
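
A sketch of the multiple-file pattern from the list above, with hypothetical names, paths, and sizes:

-- Two equally sized files in one file group; proportional fill spreads allocations across both
ALTER DATABASE SalesDB ADD FILEGROUP FG_Data;
ALTER DATABASE SalesDB
ADD FILE (NAME = N'SalesDB_Data1', FILENAME = N'D:\Data\SalesDB_Data1.ndf', SIZE = 4GB, FILEGROWTH = 512MB),
         (NAME = N'SalesDB_Data2', FILENAME = N'E:\Data\SalesDB_Data2.ndf', SIZE = 4GB, FILEGROWTH = 512MB)
TO FILEGROUP FG_Data;

-- Optionally make it the default so new objects land there unless placed explicitly
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_Data DEFAULT;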

Transaction logs

The transaction log records changes made to a database and stores enough information to allow SQL Server to recover the database. Recovery reconciles the data in the data files with changes recorded in the transaction log. The recovery process happens every time the server instance is restarted and optionally when a database or log restore occurs.

Physically, the transaction log consists of one or more files configured as a “circular log”. If multiple log files exist you can think of them as being a single concatenated space. There are no parallel log operations when multiple files are used and typically no performance advantages. The use of multiple log files usually occurs when the existing log file needs to be extended but there is no allocated space on the Windows device. By adding another log file, the additional space is available to the log writer.

The buffer manager in the storage engine guarantees that the log file will be written to before the change is made to the database (called write-ahead logging). Writes to the log are asynchronous; however, the storage engine must receive a successful response from the log write operation at the end of a transaction before acknowledgement is made to the client that the request was successful. The log write buffer can hold up to 60 KB of log data. The buffer is flushed at the completion of each transaction or when the buffer is full, whichever occurs first.

Information in the log is stored in variable length records that are uniquely identified by a Log Sequence Number. The log file contains information regarding:

  • The start and end of each transaction
  • Data modifications
  • Extent and page allocations and de-allocations
  • Creation and elimination of a table or index

Each log file consists of a number of virtual log files based on the initial size of all transaction log files and the growth increment set for auto expansion. Virtual log files are the smallest unit of truncation for the transaction log. When a virtual log file no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions. Figure 3 shows how circular logging uses multiple virtual logs in a single physical file.

[Figure 3. Circular logging with multiple virtual log files in a single physical log file (diagram not reproduced)]
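
To see the virtual log files inside a physical log file, the long-standing (undocumented but widely used) DBCC LOGINFO command returns one row per VLF:

DBCC LOGINFO;   -- Status = 2 means the VLF is still active and cannot be truncated yet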

Source : Microsoft SQL Server Best Practices and Design Guidelines for EMC Storage Solution Guide - Part number H14621 (December 2015, EMC Corporation)

Posted by Steve Lim

-- Create Test DB
CREATE DATABASE [DBName]
GO
-- Take the Database Offline
ALTER DATABASE [DBName] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
-- Take the Database Online
ALTER DATABASE [DBName] SET ONLINE
GO
-- Clean up
DROP DATABASE [DBName]
GO

 

Posted by Steve Lim

SQL Server 2016 improves the HA/DR solution, AlwaysOn, and also adds this great feature to Standard Edition with some limitations. AlwaysOn in SQL Server 2016 Standard Edition is very similar to the mirroring feature, which was deprecated in SQL Server 2012.

These are some of the limitations of AlwaysOn in SQL Server 2016 Standard Edition.

[Screenshot: table of Standard Edition limitations not reproduced]

As of this writing, SQL Server 2016 has not been released yet. The current release is SQL Server 2016 RC3 (April 15, 2016), which comes only in Evaluation (Enterprise), Developer, and Express editions, so I couldn't confirm the limitations of this feature on Standard Edition.

 

So I'd like to show you how to install and configure an AlwaysOn Availability Group with SQL Server 2016 Evaluation Edition.

 

1. Set up Windows Failover clustering


The big difference between mirroring and AlwaysOn is failover clustering: mirroring doesn't use it, but AlwaysOn does. Let's get started installing failover clustering on Windows Server 2012. SQL Server 2016 supports Windows Server 2012 or later.

Install the Failover Clustering feature on the first node.


Then do the same thing on the second node.

Then open the Failover Cluster Manager on any node.


Click Validate configuration.


Add nodes to join this new cluster.


After the validation finishes, review the report carefully for any error messages.


If there are no error messages in the report, it's time to create the new cluster.


AlwaysOn doesn't require any shared disk in the failover cluster, so when the new cluster is created, "Add all eligible storage to the cluster" should be unchecked.


When the cluster creation is finished, the Failover Cluster Manager will look like this.

[Screenshot: Failover Cluster Manager with the new cluster not reproduced]

2. Install SQL Server 2016 RC3


It's the same as in previous versions. SQL Server 2016 doesn't need .NET Framework 3.5; it uses .NET Framework 4.0 instead.


The SQL Server 2016 installation image doesn't include SQL Server Management Studio (SSMS) or SQL Server Data Tools (SSDT); these tools need to be downloaded separately. If you click "Install SQL Server Management Tools", the installer opens a web browser so you can download the tool.


This is the new SQL Server Management Studio. Its theme looks like Visual Studio 2012.

[Screenshot: the new SQL Server Management Studio not reproduced]

3. Set up AlwaysOn Availability Group


First, enable AlwaysOn Availability Groups in the SQL Server service's properties (in SQL Server Configuration Manager).
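
After enabling the option, the SQL Server service must be restarted. You can confirm the setting from T-SQL:

SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;  -- 1 = AlwaysOn is enabled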


I will configure AlwaysOn on the database "AG_Test".


If the second node doesn't have the same database on its instance, SQL Server 2016 will create it during the AlwaysOn configuration. But if the database is huge, the configuration will take quite a long time, so I backed up the database on the first node and then restored it on the second node.
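
A sketch of that backup/restore step with a hypothetical share path (note that the database must be in the FULL recovery model to join an availability group):

-- On the first node (the intended primary)
BACKUP DATABASE AG_Test TO DISK = N'\\fileshare\backup\AG_Test.bak' WITH INIT;
BACKUP LOG AG_Test TO DISK = N'\\fileshare\backup\AG_Test.trn' WITH INIT;

-- On the second node; WITH NORECOVERY leaves it restoring so it can join the group
RESTORE DATABASE AG_Test FROM DISK = N'\\fileshare\backup\AG_Test.bak' WITH NORECOVERY;
RESTORE LOG AG_Test FROM DISK = N'\\fileshare\backup\AG_Test.trn' WITH NORECOVERY;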


Create new Availability Group on the first node.


The target database has already been copied and restored on the second node, so "Join Only" will be chosen. If you don't have the same database on the second node, "Full" should be selected.


Now it's done!


When you open the Availability Group's dashboard, it will look like this.

[Screenshot: Availability Group dashboard not reproduced]

Reference : Introducing Microsoft SQL Server 2016, Preview 2 - Stacia Varga, Denny Cherry, Joseph D'Antoni


Posted by Steve Lim

Here is some code that can help find all stored procedures (SPs) related to a specific table.

-- #1. Object names only
SELECT DISTINCT so.name
FROM syscomments sc
INNER JOIN sysobjects so ON sc.id = so.id
WHERE sc.TEXT LIKE '%tablename%'

-- #2. Object names plus type (P = stored procedure, V = view, and so on)
SELECT DISTINCT o.name, o.xtype
FROM syscomments c
INNER JOIN sysobjects o ON c.id = o.id
WHERE c.TEXT LIKE '%tablename%'
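
On SQL Server 2005 and later, the catalog views are a safer variant: syscomments splits long definitions into 4,000-character chunks, so a match can be missed across a chunk boundary, while sys.sql_modules stores the full definition. A sketch:

SELECT DISTINCT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
INNER JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%tablename%';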


Posted by Steve Lim

-- Total defined byte length of the columns in each user table, with a grand total row
SELECT CASE WHEN (GROUPING(sob.name) = 1) THEN 'All_Tables'
            ELSE ISNULL(sob.name, 'unknown') END AS Table_name,
       SUM(sys.length) AS Byte_Length
FROM sysobjects sob, syscolumns sys
WHERE sob.xtype = 'u'
  AND sys.id = sob.id
GROUP BY sob.name WITH CUBE

Posted by Steve Lim

-- Index fragmentation for the current database, most fragmented first
SELECT dbschemas.[name] AS 'Schema',
       dbtables.[name] AS 'Table',
       dbindexes.[name] AS 'Index',
       indexstats.avg_fragmentation_in_percent,
       indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables ON dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas ON dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
    AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.database_id = DB_ID()
ORDER BY indexstats.avg_fragmentation_in_percent DESC;
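
Once you have the fragmentation numbers, a common rule of thumb (not a hard rule) is to REORGANIZE indexes between roughly 5% and 30% fragmentation and REBUILD above 30%, ignoring very small indexes. A sketch with a hypothetical table name:

ALTER INDEX ALL ON dbo.MyTable REORGANIZE;                    -- lightweight, always online
ALTER INDEX ALL ON dbo.MyTable REBUILD;                       -- offline by default
-- ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (ONLINE = ON); -- Enterprise Edition only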

Posted by Steve Lim

Here is some code that can help find all foreign key relationships for a specific table.

SELECT
K_Table = FK.TABLE_NAME,
FK_Column = CU.COLUMN_NAME,
PK_Table = PK.TABLE_NAME,
PK_Column = PT.COLUMN_NAME,
Constraint_Name = C.CONSTRAINT_NAME
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
INNER JOIN (
SELECT i1.TABLE_NAME, i2.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'
) PT ON PT.TABLE_NAME = PK.TABLE_NAME
---- Optional filters: append exactly one WHERE clause (it must come before any ORDER BY):
-- WHERE PK.TABLE_NAME = 'something'
-- WHERE FK.TABLE_NAME = 'something'
-- WHERE PK.TABLE_NAME IN ('one_thing', 'another')
-- WHERE FK.TABLE_NAME IN ('one_thing', 'another')
---- Optional ordering:
-- ORDER BY 1,2,3,4



Posted by Steve Lim

These days I'm digging into how to collect and analyze wait events on SQL Server. I created a stored procedure to collect wait event information and set it up as a SQL Agent job that runs every 5 or 10 minutes. Then, using the same procedure with a different parameter, I can get the analyzed wait event information. I also reviewed Paul S. Randal's post (http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/) while writing this stored procedure. Thanks, Paul.


After collecting wait event information, I can get meaningful data for troubleshooting database issues.



Here is the procedure code. Please let me know if you have any comments.


/*dba_WaitsCollecting_V1 =====================================================
  File:     dba_WaitsCollecting.sql
 
  Summary:  Collect Wait stats and session information
 
  SQL Server Versions: 2005 onwards
------------------------------------------------------------------------------
  1/17/2017 : Written by Steve Lim
------------------------------------------------------------------------------
  Usage : -- Collecting information
          [dbo].[dba_WaitsCollecting]  
    -- Analyzing information
          [dbo].[dba_WaitsCollecting] 0, '2017-01-17 09:00:00', '2017-01-18 17:00:00'
============================================================================*/
CREATE PROCEDURE [dbo].[dba_WaitsCollecting]
  @isCollect bit = 1 /* default : 1 (1: collect, 0: analyze) */
  , @beginTime datetime = NULL /* default : getdate() - 1 */
  , @endTime   datetime = NULL /* default : getdate() */
AS

Set NoCount On;
Set XACT_Abort On;
Set Quoted_Identifier On;

IF ISNULL(OBJECT_ID('dba_WaitTasksCollected'),0) = 0 BEGIN
 CREATE TABLE [dbo].[dba_WaitTasksCollected](
        [Num]       [int] IDENTITY(1,1) NOT NULL,
        [TimeCollected]          [nvarchar](24) DEFAULT(CONVERT(VARCHAR, GETDATE(), 112) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(HOUR,GETDATE())),2) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(MINUTE,GETDATE())),2)),
  [SPID]      [smallint] NULL,
  [Thread]    [int]      NULL,
  [Scheduler] [int]      NULL,
  [Wait_ms]   [bigint]   NULL,
  [Wait_type] [nvarchar](60) NULL,
  [Blocking_SPID] [smallint] NULL,
  [Resource_description] [nvarchar](3072) NULL,
  [Node_ID]   [nvarchar](3072) NULL,
  [DOP]       [smallint] NULL,
  [DBID]      [smallint] NULL,
  [Help_Info_URL] [XML] NULL,
  [Query_plan]    [XML] NULL,
  [Text]      [nvarchar](max) NULL,
  CONSTRAINT [PK_WaitTasksCollected] PRIMARY KEY CLUSTERED 
 (
  [Num] ASC
 )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
 ) ON [PRIMARY]
END

IF ISNULL(OBJECT_ID('dba_WaitsCollected'),0) = 0 BEGIN
 CREATE TABLE [dbo].[dba_WaitsCollected](
        [Num]                    [int] IDENTITY(1,1) NOT NULL,
        [TimeCollected]          [nvarchar](24) DEFAULT(CONVERT(VARCHAR, GETDATE(), 112) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(HOUR,GETDATE())),2) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(MINUTE,GETDATE())),2)),
  [wait_type]              [nvarchar](60) NOT NULL,
  [waiting_tasks_count]    [bigint]       NOT NULL,
  [wait_time_ms]           [bigint]       NOT NULL,
  [max_wait_time_ms]       [bigint]       NOT NULL,
  [signal_wait_time_ms]    [bigint]       NOT NULL,
  CONSTRAINT [PK_WaitsCollected] PRIMARY KEY CLUSTERED 
 (
  [Num] ASC
 )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
 ) ON [PRIMARY]
END

BEGIN

IF @isCollect = 1  -- COLLECT
BEGIN

 INSERT INTO dba_WaitTasksCollected ([SPID],[Thread],[Scheduler],[Wait_ms],[Wait_type],[Blocking_SPID]
                ,[Resource_description],[Node_ID],[DOP],[DBID],[Help_Info_URL],[Query_plan],[Text])
 SELECT 
  [owt].[session_id] AS [SPID],
  [owt].[exec_context_id] AS [Thread],
  [ot].[scheduler_id] AS [Scheduler],
  [owt].[wait_duration_ms] AS [Wait_ms],
  [owt].[Wait_type],
  [owt].[blocking_session_id] AS [Blocking_SPID],
  [owt].[Resource_description],
  CASE [owt].[Wait_type]
   WHEN N'CXPACKET' THEN
    RIGHT ([owt].[Resource_description],
     CHARINDEX (N'=', REVERSE ([owt].[Resource_description])) - 1)
   ELSE NULL
  END AS [Node_ID],
  [eqmg].[dop] AS [DOP],
  [er].[database_id] AS [DBID],
  CAST ('https://www.sqlskills.com/help/waits/' + [owt].[wait_type] as XML) AS [Help_Info_URL],
  [eqp].[Query_plan],
  [est].text as [Text]
 FROM sys.dm_os_waiting_tasks [owt]
 INNER JOIN sys.dm_os_tasks [ot] ON
  [owt].[waiting_task_address] = [ot].[task_address]
 INNER JOIN sys.dm_exec_sessions [es] ON
  [owt].[session_id] = [es].[session_id]
 INNER JOIN sys.dm_exec_requests [er] ON
  [es].[session_id] = [er].[session_id]
 FULL JOIN sys.dm_exec_query_memory_grants [eqmg] ON
  [owt].[session_id] = [eqmg].[session_id]
 OUTER APPLY sys.dm_exec_sql_text ([er].[sql_handle]) [est]
 OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) [eqp]
 WHERE
  [es].[is_user_process] = 1
 ORDER BY
  [owt].[session_id],
  [owt].[exec_context_id];


 INSERT INTO dba_WaitsCollected ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms])
 SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms] FROM sys.dm_os_wait_stats;

END

IF @isCollect = 0 -- ANALYZE
BEGIN

 CREATE TABLE #dba_WaitsDelta (
  wait_type  nvarchar(60) NULL,
  WaitS      bigint NULL,
  ResourceS  bigint NULL,
  SignalS    bigint NULL,
  WaitCount  bigint NULL,
  Percentage int    NULL,
  RowNum     smallint NULL,
  TimeAnalyzed nvarchar(80) NULL
 )

 DECLARE @beginSnap NVARCHAR(24), @endSnap NVARCHAR(24);
 DECLARE @beginTSnap NVARCHAR(24), @endTSnap NVARCHAR(24);
-- DECLARE @beginTime datetime, @endTime datetime;

 IF @beginTime is NULL SELECT @beginTime = getdate() - 1;
 IF @endTime is NULL   SELECT @endTime = getdate();

 SELECT @beginTSnap = CONVERT(VARCHAR, @beginTime, 112) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(HOUR,@beginTime)),2) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(MINUTE,@beginTime)),2)
 SELECT @endTSnap = CONVERT(VARCHAR, @endTime, 112) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(HOUR,@endTime)),2) + RIGHT('00' + CONVERT(VARCHAR,DATEPART(MINUTE,@endTime)),2)

 SELECT @beginSnap = CONVERT(VARCHAR,MIN(TimeCollected)), @endSnap = CONVERT(VARCHAR,MAX(TimeCollected)) 
 FROM   [dbo].[dba_WaitsCollected]
 WHERE  TimeCollected between @beginTSnap and @endTSnap;

 INSERT INTO #dba_WaitsDelta (wait_type, WaitS, ResourceS, SignalS, WaitCount, Percentage, RowNum, TimeAnalyzed)
 SELECT ows1.wait_type as [wait_type]
       , (ows2.wait_time_ms - ows1.wait_time_ms) / 1000.0 as [WaitS]
    , ((ows2.wait_time_ms - ows1.wait_time_ms) - (ows2.signal_wait_time_ms - ows1.signal_wait_time_ms)) / 1000.0 as [ResourceS]
    , (ows2.signal_wait_time_ms - ows1.signal_wait_time_ms) / 1000.0 as [SignalS]
       , (ows2.waiting_tasks_count - ows1.waiting_tasks_count) as [WaitCount]
       , 100.0 * (ows2.wait_time_ms - ows1.wait_time_ms) / SUM (ows2.wait_time_ms - ows1.wait_time_ms) OVER() as [Percentage]
    --, (ows2.signal_wait_time_ms - ows1.signal_wait_time_ms) as [signal_wait_time_ms]
    , ROW_NUMBER() OVER(ORDER BY (ows2.wait_time_ms - ows1.wait_time_ms) DESC) as [RowNum]
    , ows1.TimeCollected + '-' + ows2.TimeCollected as TimeAnalyzed
 FROM 
  (select ows.wait_type, ows.waiting_tasks_count, ows.wait_time_ms, ows.signal_wait_time_ms, ows.TimeCollected
   from   [dbo].[dba_WaitsCollected]  ows
   where  ows.TimeCollected = (@beginSnap)
  ) ows1 inner join (
   select ows.wait_type, ows.waiting_tasks_count, ows.wait_time_ms, ows.signal_wait_time_ms, ows.TimeCollected
   from   [dbo].[dba_WaitsCollected]  ows
   where  ows.TimeCollected in (@endSnap)
  ) ows2 on ows1.wait_type = ows2.wait_type
 WHERE ows1.wait_type NOT IN (
   N'BROKER_EVENTHANDLER', N'BROKER_RECEIVE_WAITFOR',
   N'BROKER_TASK_STOP', N'BROKER_TO_FLUSH',
   N'BROKER_TRANSMITTER', N'CHECKPOINT_QUEUE',
   N'CHKPT', N'CLR_AUTO_EVENT',
   N'CLR_MANUAL_EVENT', N'CLR_SEMAPHORE',

   -- Maybe uncomment these four if you have mirroring issues
   N'DBMIRROR_DBM_EVENT', N'DBMIRROR_EVENTS_QUEUE',
   N'DBMIRROR_WORKER_QUEUE', N'DBMIRRORING_CMD',
 
   N'DIRTY_PAGE_POLL', N'DISPATCHER_QUEUE_SEMAPHORE',
   N'EXECSYNC', N'FSAGENT',
   N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'FT_IFTSHC_MUTEX',
  
   -- Maybe uncomment these six if you have AG issues
   N'HADR_CLUSAPI_CALL', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
   N'HADR_LOGCAPTURE_WAIT', N'HADR_NOTIFICATION_DEQUEUE',
   N'HADR_TIMER_TASK', N'HADR_WORK_QUEUE',

   N'KSOURCE_WAKEUP', N'LAZYWRITER_SLEEP',
   N'LOGMGR_QUEUE', N'MEMORY_ALLOCATION_EXT',
   N'ONDEMAND_TASK_QUEUE',
   N'PREEMPTIVE_XE_GETTARGETSTATE',
   N'PWAIT_ALL_COMPONENTS_INITIALIZED',
   N'PWAIT_DIRECTLOGCONSUMER_GETNEXT',
   N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP', N'QDS_ASYNC_QUEUE',
   N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEE', N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
   N'QDS_SHUTDOWN_QUEUE', N'REDO_THREAD_PENDING_WORK',
   N'REQUEST_FOR_DEADLOCK_SEARCH', N'RESOURCE_QUEUE',
   N'SERVER_IDLE_CHECK', N'SLEEP_BPOOL_FLUSH',
   N'SLEEP_DBSTARTUP', N'SLEEP_DCOMSTARTUP',
   N'SLEEP_MASTERDBREADY', N'SLEEP_MASTERMDREADY',
   N'SLEEP_MASTERUPGRADED', N'SLEEP_MSDBSTARTUP',
   N'SLEEP_SYSTEMTASK', N'SLEEP_TASK',
   N'SLEEP_TEMPDBSTARTUP', N'SNI_HTTP_ACCEPT',
   N'SP_SERVER_DIAGNOSTICS_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
   N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
   N'SQLTRACE_WAIT_ENTRIES', N'WAIT_FOR_RESULTS',
   N'WAITFOR', N'WAITFOR_TASKSHUTDOWN',
   N'WAIT_XTP_RECOVERY',
   N'WAIT_XTP_HOST_WAIT', N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG',
   N'WAIT_XTP_CKPT_CLOSE', N'XE_DISPATCHER_JOIN',
   N'XE_DISPATCHER_WAIT', N'XE_TIMER_EVENT')
 AND (ows2.waiting_tasks_count - ows1.waiting_tasks_count) > 0;

 SELECT
  MAX ([W1].[wait_type]) AS [WaitType],
  CAST (MAX ([W1].[WaitS]) AS DECIMAL (16,2)) AS [Wait_S],
  CAST (MAX ([W1].[ResourceS]) AS DECIMAL (16,2)) AS [Resource_S],
  CAST (MAX ([W1].[SignalS]) AS DECIMAL (16,2)) AS [Signal_S],
  MAX ([W1].[WaitCount]) AS [WaitCount],
  CAST (MAX ([W1].[Percentage]) AS DECIMAL (5,2)) AS [Percentage],
  CAST ((MAX ([W1].[WaitS]) / MAX ([W1].[WaitCount])) AS DECIMAL (16,4)) AS [AvgWait_S],
  CAST ((MAX ([W1].[ResourceS]) / MAX ([W1].[WaitCount])) AS DECIMAL (16,4)) AS [AvgRes_S],
  CAST ((MAX ([W1].[SignalS]) / MAX ([W1].[WaitCount])) AS DECIMAL (16,4)) AS [AvgSig_S],
  CAST ('https://www.sqlskills.com/help/waits/' + MAX ([W1].[wait_type]) as XML) AS [Help/Info URL],
  MAX(W1.TimeAnalyzed) as TimeAnalyzed
 FROM #dba_WaitsDelta AS [W1]
 INNER JOIN #dba_WaitsDelta AS [W2]
  ON [W2].[RowNum] <= [W1].[RowNum]
 GROUP BY [W1].[RowNum]
 HAVING SUM ([W2].[Percentage]) - MAX( [W1].[Percentage] ) < 97; -- percentage threshold

 DROP TABLE #dba_WaitsDelta;

END

END
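
For reference, here is a sketch of the Agent job mentioned in the introduction; the job name, schedule name, and the database that holds the procedure (DBATools) are assumptions:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Collect Wait Stats';
EXEC dbo.sp_add_jobstep @job_name = N'Collect Wait Stats',
    @step_name = N'Run collector',
    @subsystem = N'TSQL',
    @database_name = N'DBATools',
    @command = N'EXEC dbo.dba_WaitsCollecting;';
EXEC dbo.sp_add_schedule @schedule_name = N'Every 5 minutes',
    @freq_type = 4,             -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,      -- minute units
    @freq_subday_interval = 5;
EXEC dbo.sp_attach_schedule @job_name = N'Collect Wait Stats', @schedule_name = N'Every 5 minutes';
EXEC dbo.sp_add_jobserver @job_name = N'Collect Wait Stats';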


Posted by Steve Lim

How to automate backups of SSAS cubes!

This article shows how to automate backups of SSAS cubes using Windows PowerShell.

 

1. Create the text file and scripts in C:\Scripts\

a. CubeList.txt

EnterpriseDW
TrackingDW
ETLDW


b. Backup_SSAS.ps1

$ServerName = "localhost"
## Load the AMO namespace
$loadInfo = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
[Microsoft.AnalysisServices.BackupInfo]$serverBackup = New-Object ([Microsoft.AnalysisServices.BackupInfo])
[Microsoft.AnalysisServices.Server]$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect($ServerName)
If ($server.name -eq $null)
{
    Write-Output ("Server '{0}' not found" -f $ServerName)
    break
}
$DBList = Get-Content "c:\Scripts\CubeList.txt"
Foreach ($DBName in $DBList)
{
    $DB = $server.Databases.FindByName($DBName)
    if ($DB -eq $null)
    {
        Write-Output ("Database '{0}' not found" -f $DBName)
    }
    else
    {
        Write-Output ("----------------------------------------------------------------")
        Write-Output ("Server : {0}" -f $Server.Name)
        Write-Output ("Database: {0}" -f $DB.Name)
        Write-Output ("DB State: {0}" -f $DB.State)
        Write-Output ("DB Size : {0}MB" -f ($DB.EstimatedSize/1024/1024).ToString("#,##0"))
        Write-Output ("----------------------------------------------------------------")
        # Default location: $BackupDestination = $server.ServerProperties.Item("BackupDir").value
        $BackupDestination = "\\backup\dbbackup\SQL_Server\SSAS\" + $DB.Name
        $serverBackup.AllowOverwrite = 1
        $serverBackup.ApplyCompression = 1
        $serverBackup.BackupRemotePartitions = 1
        if (-not $backupDestination.EndsWith("\"))
        {
            $backupDestination += "\"
        }
        [string]$backupTS = Get-Date -Format "yyyyMMddHHmm"
        $serverBackup.file = $backupDestination + $db.name + "_" + $backupTS + ".abf"
        $serverBackup.file
        $db.Backup($serverBackup)
        if ($?) { "Successfully backed up " + $db.Name + " to " + $serverBackup.File }
        else { "Failed to back up " + $db.Name + " to " + $serverBackup.File }
    }
}
$server.Disconnect()


c. Backup_SSAS_main.ps1

[string]$backupTS = Get-Date -Format "yyyyMMddHHmm"
powershell -executionpolicy bypass -file C:\Scripts\Backup_SSAS.ps1 > C:\Scripts\Backup_SSAS_Logs\Backup_SSAS_$backupTS.log


2. Create backup log directory

: C:\Scripts\Backup_SSAS_Logs\


3. Create a job in SQL Server database

Type : Operating system(CmdExec)

Run as : SQL Server Agent Service Account

powershell -executionpolicy bypass -file C:\Scripts\Backup_SSAS_main.ps1


4. The backup log files look like this:

----------------------
Server : ABIP01
Database: EnterpriseDW
DB State: Processed
DB Size : 32MB
----------------------
\\backup\dbbackup\SQL_Server\SSAS\EnterpriseDW\EnterpriseDW_201508042052.abf
Successfully backed up EnterpriseDW to \\backup\dbbackup\SQL_Server\SSAS\EnterpriseDW\EnterpriseDW_201508042052.abf
----------------------
Server : ABIP01
Database: TrackingDW
DB State: Processed
DB Size : 14MB
----------------------
\\backup\dbbackup\SQL_Server\SSAS\TrackingDW\TrackingDW_201508042052.abf
Successfully backed up TrackingDW to \\backup\dbbackup\SQL_Server\SSAS\TrackingDW\TrackingDW_201508042052.abf


Posted by Steve Lim