Monday, July 28, 2014

Why Put Your Database into Source Control?

https://www.simple-talk.com/sql/database-administration/why-put-your-database-into-source-control/?utm_source=ssc&utm_medium=publink&utm_content=whydbsc

Database delivery patterns & practices
STAGE 1: VERSION CONTROL
Checking program code into source control is a daily ritual for most developers, but versioning database code is less well-understood. Grant Fritchey argues that getting your databases under source control is not only vital for the stability of development and deployment, but it will make your life easier when something does go wrong.
Software development is a difficult and demanding discipline. It gets even more difficult when you bring teams of developers together on a single project. One of the fundamental issues arises around the code itself. Who is responsible for what piece of it? What changes have they made? How do you get those changes from one member of the team to the next? How do you keep a history of each change, in case it causes problems later that you need to track down and fix? The answer to all these questions, and many more, is to manage your code through a source control system.
Source control systems, also called version control systems (VCS) or revision control systems, date back to the birth of modern computing. One of the first to be developed appeared in 1975, when there was finally enough disk space to store a second copy of the program, just in case. Since then, getting application code into source control has gone beyond being an industry standard practice to simply being a part of programming, like writing a function or an IF clause. Yes, there are some shops or individual developers who don't use source control for their code, but those are the glaring exceptions that prove the almost universal rule that code goes into source control.
But databases are different.
The history of databases diverges from the history of code. At some point developers were no longer responsible for databases. Instead, databases moved into the hands of system administrators or dedicated database administrators, who, frankly, looked at the world quite a bit differently than developers. They spent their time worrying about backups, availability, integrity and performance.
In many cases, database development work moved into the realm of the DBA. While many DBAs came from the ranks of developers, they spent more time worrying about all those administration tasks than the development tasks, and some of the best practices and methods created for managing code just weren't applied to databases. But they should be.

Backups

DBAs are very good at putting in place backup schemes that will protect the production data. However, the SQL that defines data structures and stored procedures is nothing but code, and a full database backup is an unwieldy device by which to maintain copies of the schema, retrieve changes, or track history. For example, to find out what changed between the previous and current versions of a stored procedure, a DBA would be forced to use a third party tool that can compare directly against a backup, or to run a full restore of the database to a secondary location and then extract the stored procedure definition. This isn't always possible, it's frequently impractical, and it's certainly going to be slow.
Getting a database into source control provides a much more efficient mechanism for backing up the SQL code of your database. Retrieving a previous version of a stored procedure entails simply inspecting the history of changes within your VCS. Retrieval is nearly instantaneous.
Once you realize that your SQL is code, it immediately makes sense to use the same backup mechanisms that code uses, which is a VCS.

Auditing

Within most database management systems, it's possible to find out when an object was created or last modified, and which login performed that action. However, there is usually no historical record of any previous modifications to that object. Further, depending on the security mechanism within the database, you may simply see that a system administrator or database owner made the change, with no indication as to the actual identity of the person working within that role.
If you have your database in a VCS, and use that VCS as a fundamental part of your development and deployment mechanisms, then it will provide exactly that type of tracking. All changes originate in the VCS and are not made directly against the production system outside the process around your VCS. You'll know who made what change and when it was made.
Many organizations have to comply with legal requirements for change auditing, such as those mandated by Sarbanes-Oxley. Implementing a VCS could be the quickest and easiest way to provide the required level of historical tracking of all changes so that for every change to the database you know who did it and when.

Integration

As soon as we enter a new code file into the VCS, it assigns it a version. Each time we commit a change to that file, the version increments, and we have access to the current version and all previous versions of the file. When we put a database into the VCS, this means that every database object (table, view, stored procedure and so on) in the VCS has a version number. We can also create labels, or tags, that allow us to assign a meaningful "build number" to the set of files that comprise a particular version of a database.
Furthermore, having the database in source control directly alongside the application will integrate the database changes with the application code changes, so that you'll always know that the version of the database being deployed directly corresponds to the version of the application being deployed. This direct integration helps to ensure better coordination between teams and it can help when troubleshooting issues.

Automating Deployments

If all changes needed for a production system are in a development database somewhere instead of inside a VCS, deployments are necessarily going to be a manual affair. You will need a process that will generate changes from your development database in order to make the production database mirror the newer design. There are third party products that can help, but how do you differentiate between objects that are meant to go out with one version and objects that are meant to go out with a different version of your code? From within a development database this is frequently impossible, since there is no clear and easy method for differentiating object changes within that database.
Once you start generating your deployment scripts from source control, a number of opportunities open up. You'll be able to differentiate the database objects into known versions, which will allow you to control what is getting deployed. Once you can control what is being deployed, you can bring automation to bear on the deployment process. You'll be able to take advantage of continuous integration and other automated deployment and testing mechanisms that application code already uses. Automated deployments also mean more testing and validation of those deployments, which can help to ensure the final deployment to production is successful.

Conclusion

Since the SQL that defines a database is code, it just makes sense to take advantage of the existing and long-established mechanisms for managing that code. You'll have a better way to back up that code and, more importantly, retrieve previous versions of it. Source control for the database provides an audit trail to help with troubleshooting and legal compliance. You'll get better integration with your application code through shared source management. Finally, you'll be able to automate your deployments. All these reasons make putting databases into source control a smart move to help improve management of systems within your organization.
If you'd like to learn more about database source control, Red Gate has put together a free whitepaper "5 Common Barriers to Database Source Control, and How You Can Get Around Them".




Restoring the SQL Server Master Database Even Without a Backup

http://www.mssqltips.com/sqlservertip/3266/restoring-the-sql-server-master-database-even-without-a-backup/

Problem
While researching this article I was surprised by the number of DBAs who back up their user databases, but not their system databases. They either do not fully understand the importance of these databases or have been lulled into complacency by years of smooth sailing. Whatever your reason may be, I'm going to show you how to save yourself (mostly) should you ever find yourself with no viable master database and no good backup.
Solution
There are several reasons you may need to recover your master database. You may have removed something critical and you want it back: a login, a linked server, or some other system object. This may be the easiest scenario because you still have a master database to start from. Your master database may have been damaged due to hardware or software failure and rendered unusable. You may be restoring to a brand new server or creating a clone of your instance.
I'll cover three scenarios in this tip.
  • First, you have a master database that is viable and you have a backup; all you want to do is a little "time travel" to get back to a known state.
  • Second, your master database is gone or unusable, but you have a backup from which to restore. Thank your lucky stars.
  • Last, your master database is kaput and you have no backup to use for recovery. Don't get out your resume just yet; we can get your instance back up and running, but there will be a lot of work to do after that to return to some semblance of what it once was.
Warning! If your master database is all that's broken STOP NOW and make copies of your msdb and model database files! Some of the instructions below will overwrite them destructively. Let's not add insult to injury by whacking stuff that is still good.

Scenario 1: Restoring an existing SQL Server master database from backup

Restoring master is tricky because it contains the information about all the other databases in the instance and is required for startup. In the case where you have a "good" master database to start from you have to start in single-user mode using the -m flag. In SQL Server Configuration Manager, right-click on the SQL Server service, select properties, then the startup parameters tab. Enter -m in the top box and click add, then apply, then restart SQL Server.
Single User Setup Screenshot
Now, with SQL Server in single-user mode, you can recover your master database from backup using the command line (SQL Server Management Studio won't work in single-user mode!). Find sqlcmd.exe in your .../Tools/Binn directory and run it. You will need to use the -S flag if you have a named instance and the -U and -P flags if you're not using a trusted connection (see BOL for more information). From there you can restore from your backup just like any user database.
Restore Master Database Screenshot
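As a rough sketch (the backup path here is hypothetical), the restore issued from sqlcmd might look like the following; note that SQL Server shuts itself down once master has been restored:
-- Run from sqlcmd while the instance is in single-user mode
RESTORE DATABASE master
FROM DISK = N'C:\Backups\master_full.bak'
WITH REPLACE;
GO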
That's it, now remove the -m from the SQL Server service startup options and restart the SQL Server service in multi-user mode! Detailed master database restore instructions are on MSDN, and there is a very good series of tips on MSSQLTips.com to help you prepare for a master database rebuild.

Scenario 2: Rebuilding then restoring the SQL Server master database from backup

If you don't have a viable master database, you need to create one first; if your master database is there and you just want to recover from backup, use scenario one. There are two ways you can rebuild your master database: for 2005, 2008 and 2008 R2 you must use setup to recreate ALL of your system databases at once, while for 2008 R2 and later you can use the template option to overwrite just the master database.

Using Setup to Recreate System Databases

This is where you want to make sure you have copies of your good model and msdb databases saved off somewhere safe! From the bootstrap directory - "C:\Program Files\Microsoft SQL Server\<nnn>\Setup Bootstrap\<release>" (replace <nnn> with the right version and <release> with the right release) - run the following command, substituting your instance name, a Windows account that will have administrative rights, and a secure password for SA where indicated:
.\setup /ACTION=REBUILDDATABASE /INSTANCENAME=<instance_name> /SQLSYSADMINACCOUNTS=<admin_account> /SAPWD=<sa_password>
    


For Example:
Rebuild database screenshot
Now that you have a master database to work with you can recover it from a backup if you have one. At this time you should also shut down the services and replace the blank msdb and model database files with the copies you saved off previously. Jump to scenario three to find out how to recover at least some of your master database's data if you're not recovering from a backup.

Using the Template Master Database

For 2008 R2 and 2012 the system database recovery function works differently. Rather than using T-SQL scripts there are template databases created at install time, which are copied over the database and transaction log files of your system databases by the setup.exe /REBUILDDATABASE option. If your msdb and model databases are okay then it may be simpler to manually copy the template files of the master database to where they belong rather than rebuild all three system databases and restore the msdb and model from backup. The template files are found in the "C:\Program Files\Microsoft SQL Server\MSSQL<version>.<instance_name>\MSSQL\Binn\Templates" directory depending on your instance name and version.
If you have a good backup of your master database you can now restore it using the instructions from scenario one. If you don't have a backup then keep reading to see how you may be able to get back some of the data you're missing. Start your instance and follow along as we try to get stuff put back together.

Scenario 3: Reconstructing un-backed up data from the SQL Server Master Database

First, if you skipped the first sections of this tip, follow the instructions above to re-create your master database making sure to keep good copies of your other system databases in case they get overwritten by the rebuild.
At this point you should have an instance up and running, if you have master database backups and have restored then congratulations, you may breathe a sigh of relief and go on to the next tip. If you had no backups then we still have some work to do.

Reconnecting the databases

Connect to your server using SSMS and the admin account you used when it was installed or that you specified in the rebuild step above. The first thing you will notice is that the databases tree is empty when you expand it. This is because master contained all the database location information and it's now gone. The databases are still out there; we just need to tell SQL Server where they are. If you have backups of these databases, the easiest solution is to use them now. If you don't, this will take some legwork: you will need to record the location of all the database and transaction log files for each database and attach them manually, using either the following T-SQL code (with the right database name and file names substituted) or SQL Server Management Studio's attach database command:
USE [master]
GO
CREATE DATABASE [UserDatabaseName] ON
  (FILENAME = N'C:\sqldata\data.mdf'),
  (FILENAME = N'D:\sqltlog\tlog.ldf')
FOR ATTACH
GO
    
Alternatively, right-click the Databases node and select "Attach …", then click "Add" on the Attach Databases dialog. By selecting the .mdf file for each database, SQL Server will find the .ndf and .ldf files (if they have not been moved).
Attach database screenshot
Note that you may need to change the ownership of the databases; they will be owned by the account that re-attached them. You will need to recreate the logins first!
alter authorization on database::[database_name] to  "owner_name"
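For example, recreating a lost SQL login before re-assigning ownership might look like this minimal sketch (the login name, password and database name are hypothetical):
-- Recreate a SQL login that was lost along with master
CREATE LOGIN app_login WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO
-- Then hand the re-attached database back to it
ALTER AUTHORIZATION ON DATABASE::[UserDatabaseName] TO app_login;
GO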
    

Restoring other system objects

There are several other object types that exist in the master database that will be lost when you rebuild or restore from a template and don't have a backup: logins, endpoints, and linked servers, to name a few. This is where the rest of the restore gets either really tedious or impossible, but look on the bright side - you'll be ridding yourself of all that baggage that has been accumulating for years. If similar objects exist in other instances (your QA or development environments, for example) you can use SSMS to generate CREATE scripts to edit and deploy on the restored server. Even if the objects are not an exact match, the scripts will be close enough to save you a lot of needless typing.
Next Steps

  • First and foremost, without further delay, create and implement a backup plan for your system databases! It's as simple as setting up a maintenance plan (a minimal T-SQL sketch follows this list). Once you've got that set up, take a look at your backup plans in their entirety (you do have them, right?) and review them with this experience fresh in your mind.
    • Are your system databases backed up frequently enough to meet your recovery point objectives?
    • Are your user database backups actually running and will they meet your RPO?
    • Where are the holes, if any, that you want to fill and what will it take to fill them?
    • If you're not backing up due to budget constraints, you need to make a business case for a tape drive or external drive of some sort; they are far cheaper than lost business due to a prolonged outage or, worse, lost data.
  • Backup and recovery are core skills for any DBA (on any platform): make sure you have backups and make sure you regularly test not only your backup media, but your processes and procedures as well. Crunch time with your CIO looking over your shoulder is not the time to verify that your recovery procedure is 100% up-to-date!
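As a starting point, here is a minimal sketch of backing up the three system databases to a hypothetical local path; a real plan should write to separate media and include regular restore tests:
-- Full backups of the system databases (paths are hypothetical)
BACKUP DATABASE master TO DISK = N'C:\Backups\master_full.bak' WITH INIT, CHECKSUM;
BACKUP DATABASE model TO DISK = N'C:\Backups\model_full.bak' WITH INIT, CHECKSUM;
BACKUP DATABASE msdb TO DISK = N'C:\Backups\msdb_full.bak' WITH INIT, CHECKSUM;
GO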

Wednesday, July 9, 2014

Creating Memory-Optimized Table in SQL Server 2014

http://www.sqlservercentral.com/articles/memory-optimized+table/110748/

SQL Server 2014 was released on April 1st. One of its most exciting features is the new In-Memory technology. Today I'm going to give you a quick introduction to memory-optimized tables in SQL Server 2014 and their limitations.

Creating a memory-optimized table

First of all, let’s create a test database – DB1:
USE master;
GO
CREATE DATABASE DB1;
GO
The next step would be creating a required file group for memory-optimized tables. To do that, we need to add a memory-optimized file group to DB1:
ALTER DATABASE DB1 ADD FILEGROUP IMOLTP CONTAINS MEMORY_OPTIMIZED_DATA;
GO
Notice the keyword "MEMORY_OPTIMIZED_DATA"; it tells SQL Server that this filegroup will store memory-optimized objects.
Now let’s add a file into this file group:
ALTER DATABASE DB1 ADD FILE (NAME='IMOLTP',FILENAME='C:\DB\IMOLTP') TO FILEGROUP IMOLTP;
GO
This statement is similar to the statement that adds a Filestream file to SQL Server: the filename specified here is a folder name instead of a file name. The difference between the two is that adding memory-optimized files doesn't require turning on the Filestream feature in SQL Server. It's also possible to add multiple memory-optimized data files for greater performance.
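For example (the second path here is hypothetical), another container can be added to the same filegroup:
ALTER DATABASE DB1 ADD FILE (NAME='IMOLTP2',FILENAME='D:\DB\IMOLTP2') TO FILEGROUP IMOLTP;
GO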
Once the memory-optimized data file is ready, we can go ahead and create our first memory-optimized table:
USE DB1;
GO
CREATE TABLE T1_OLTP
( ID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000)
, MSG CHAR(8000)) WITH (MEMORY_OPTIMIZED = ON)
GO
Notice that 2 places in the previous statement are different from the traditional version:
  1. “MEMORY_OPTIMIZED = ON” at the end of the table definition
  2. Nonclustered hash index in the table definition
By having "MEMORY_OPTIMIZED = ON", we are telling SQL Server that this is a memory-optimized table. We don't need to specify the memory-optimized filegroup name; SQL Server will decide where this table is stored.
Unlike traditional disk-based tables, memory-optimized tables support two types of indexes: nonclustered hash indexes and nonclustered indexes. To be able to create a memory-optimized table, at least one index needs to be created. We have created a nonclustered hash index on our demo table. The bucket count of the hash index must be specified, and the recommended value is greater than the number of unique keys.
A primary key is also required because memory-optimized tables don’t support heap tables.
In SQL Server 2014, memory-optimized tables can be durable or non-durable. Durable memory-optimized tables are like disk-based tables: data is persisted on disk. Non-durable tables, on the other hand, only keep the table schema. Data stored in non-durable tables is not persisted on disk and will be lost after a server restart. Non-durable tables can be useful as temporary tables or staging tables where data persistence is not required. By default, memory-optimized tables are durable.
To create a durable memory-optimized table, use DURABILITY = SCHEMA_AND_DATA :
CREATE TABLE T2_OLTP
(
  ID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000)
, MSG CHAR(8000)
)
WITH (MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA)
GO
Or use DURABILITY = SCHEMA_ONLY for a non-durable table:
CREATE TABLE T3_OLTP
(
  ID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000)
, MSG CHAR(8000)
)
WITH (MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_ONLY)
GO
In our T1_OLTP table, DURABILITY is not mentioned and SQL Server will create a durable table using its default setting - SCHEMA_AND_DATA.
Now the demo memory-optimized table is ready, let’s do some simple CRUD (Create, Read, Update and Delete) operations:
INSERT INTO T1_OLTP VALUES (1,REPLICATE('A',8000))
GO
SELECT * FROM T1_OLTP
GO
UPDATE T1_OLTP SET MSG =REPLICATE('B',8000)
GO
SELECT * FROM T1_OLTP
GO
DELETE FROM T1_OLTP
GO
SELECT * FROM T1_OLTP
GO
As you can see, inserting/updating/deleting data in a memory-optimized table works the same as with disk-based tables, just faster in some cases.
Limitations
The Memory-Optimized table is a new feature in SQL Server 2014 and it comes with limitations. I have listed some limitations below:
Unsupported feature: Workaround
  • Replication: drop replication
  • Computed column: use a regular column and include the computed value with insert/update in a stored procedure
  • Trigger: replace native insert/update/delete with a stored procedure that implements the trigger logic
  • Foreign Key: replace native insert/update/delete with a stored procedure that enforces the relationship
  • Check Constraint: replace native insert/update/delete with a stored procedure that enforces the constraint
  • Unique Constraint: replace native insert/update/delete with a stored procedure that enforces the constraint
  • Alter table: drop and recreate the table
  • Create Index: all indexes need to be created in the CREATE TABLE statement
  • Merge (as target): use Insert/Update/Delete
  • Nullable column: change to a non-nullable column and change application logic accordingly
  • Identity*: use a Sequence instead
* Identity(1,1) is supported
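As an illustration of the Identity workaround above, a sequence (the name here is hypothetical) can supply key values for the demo table:
CREATE SEQUENCE dbo.seq_T1 AS INT START WITH 1 INCREMENT BY 1;
GO
INSERT INTO T1_OLTP (ID, MSG) VALUES (NEXT VALUE FOR dbo.seq_T1, REPLICATE('C',8000));
GO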
For a complete list of unsupported features, visit BOL.
In the meantime, some data types are also unsupported:
  • datetimeoffset
  • geography
  • geometry
  • hierarchyid
  • varchar(max)
  • nvarchar(max)
  • varbinary(max)
  • xml
  • text
  • ntext
  • sql_variant
  • rowversion
  • user-defined types.
More details can be found on BOL.
You might have seen some table definitions like the one below in SQL Server 2008 or up:

CREATE TABLE T1
  (
    id INT
  , msg VARCHAR(5000)
  , msg2 VARCHAR(5000)
  )
It’s acceptable for a table to have a total row size of variable length columns greater than 8K. In SQL Server 2014, this is still valid for disk-based tables, but memory-optimized table will not accept it. If we run

CREATE TABLE T2_OLTP
  (
    ID INT NOT NULL
           PRIMARY KEY NONCLUSTERED HASH WITH ( BUCKET_COUNT = 1000000 )
  , MSG VARCHAR(5000)
  , MSG2 VARCHAR(5000)
  )
  WITH (MEMORY_OPTIMIZED = ON)
GO
The following error will be raised:

In this case, we will have no choice but to redefine the table schema.
Although memory-optimized tables seem to have many limitations, it is very possible Microsoft will enhance its In-Memory technology and remove many of these constraints in future releases.
Summary
In this article, we discussed how to create memory-optimized tables, their limitations and workarounds. Hopefully this will help you in implementing memory-optimized tables in your future projects.

Dynamic Management Views: Changes in SQL Server 2014

http://www.sqlservercentral.com/articles/Dynamic+Management+View/110788/


Introduction

Dynamic management views are immensely useful tools that are provided with all editions of SQL Server.  Using them, you can quickly find out information about many aspects of SQL Server, everything from index usage, query cache contents, server process data, wait stats, and much, much more!
The addition of In-Memory OLTP brings with it a host of new DMVs that track many parts of this new feature, but there are a few other new views and changes that are worth writing home about. Attached is a spreadsheet that lists all changes I could identify between SQL Server 2012 and 2014. If you are upgrading, these changes could be invaluable for monitoring new features, or for taking changes into account so that your existing monitoring doesn't break after upgrading.

The Details

As a reference, a full list of DMVs can be returned by running the following query:
SELECT
       *
FROM sys.system_objects
WHERE name LIKE 'dm_%'
AND type = 'V'
ORDER BY name
This information can be used to verify the existence and contents of any DMV in any version.  Also note that Microsoft occasionally implements changes in service packs.  A handful of new DMVs were quietly added in SQL Server 2008R2 SP1, and so it’s worth keeping in mind that changes don’t only occur when a major version comes out.
What follows is a list of what I consider to be the most significant or interesting changes in SQL Server 2014.  We’ll cover DMVs related to In-Memory OLTP last as they constitute the huge majority of recent DMV additions.
sys.dm_db_log_space_usage
It’s a minor change, but the log_space_in_bytes_since_last_backup column was added to the view.  This could be handy for keeping track of log usage prior to, or in between backup cycles:

Note the size is in bytes, so some additional conversion will be required if you'd like that number in more digestible units, such as KB or MB.
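For example, a quick conversion to MB (run in the context of the database you want to check) might look like this:
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_space_mb,
       log_space_in_bytes_since_last_backup / 1048576.0 AS log_used_since_last_backup_mb
FROM sys.dm_db_log_space_usage;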
sys.dm_hadr_database_replica_states
Another single-column addition, this one inserted into the 6th position. If you use AlwaysOn, you can now verify whether a replica is the primary replica using the column is_primary_replica.
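A quick check might look like this:
SELECT database_id, is_primary_replica, synchronization_state_desc
FROM sys.dm_hadr_database_replica_states;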
sys.dm_io_cluster_valid_path_names
This is a brand new view in SQL Server 2014 that returns data on all valid shared disks for servers using failover clustering. If an instance is not clustered, no rows are returned. This view provides path_name, the root directory path for database and log files; cluster_owner_node, the current owner of the drive, which is the node that hosts the metadata server; and is_cluster_shared_volume, which tells you whether this is a cluster shared volume or not.
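On a clustered instance, the view can be queried directly:
SELECT path_name, cluster_owner_node, is_cluster_shared_volume
FROM sys.dm_io_cluster_valid_path_names;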
sys.dm_os_buffer_pool_extension_configuration
This new view provides configuration information about the new buffer pool extension functionality in SQL Server 2014.  It will return one row per buffer pool extension file.  If this feature is disabled, a single row will be returned with default metadata that will look like this:
The buffer pool extension allows you to expand your buffer pool onto SSDs so that you can cheaply increase performance, especially in read-heavy OLTP environments. It can be configured and used without any application or database changes, allowing it to be implemented quickly. Lastly, since it only operates on clean pages, there is no chance of data loss. We can look forward to an in-depth article in the future that will dive into this feature and thoroughly analyze its effects on SQL Server.
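As a sketch (the file path and size here are hypothetical; the extension needs to be at least as large as max server memory), enabling the feature and inspecting the view might look like this:
-- Enable the buffer pool extension on a hypothetical SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = N'F:\SSDCACHE\bufferpool.bpe', SIZE = 32 GB);
-- Inspect the configuration; one row is returned per extension file
SELECT * FROM sys.dm_os_buffer_pool_extension_configuration;
-- Disable it again
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;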
sys.dm_resource_governor_resource_pool_volumes
sys.dm_resource_governor_resource_pools
sys.dm_resource_governor_configuration
For users of the Resource Governor, a new view, as well as some changes to existing views, have been made in order to account for the ability in SQL Server 2014 to govern IO. There is no GUI available for modifying IO settings within the Resource Governor; in order to change the max or min IOPS per volume, T-SQL will need to be used to create a new resource pool or alter an existing one. Here's an example of a basic test resource pool being created:
CREATE RESOURCE POOL [test_resource_pool]
  WITH(MIN_CPU_PERCENT=0,
              MAX_CPU_PERCENT=33,
              MIN_MEMORY_PERCENT=0,
              MAX_MEMORY_PERCENT=33,
              CAP_CPU_PERCENT=100,
              AFFINITY SCHEDULER = AUTO,
              MIN_IOPS_PER_VOLUME=0,
              MAX_IOPS_PER_VOLUME=1000)
To dm_resource_governor_configuration, a single new column was added, max_outstanding_io_per_volume, which is self-explanatory.  To dm_resource_governor_resource_pools, 18 new columns were added, increasing visibility into usage within this feature.  These new IO-related columns are primarily cumulative data, as well as min and max usage values.  For example, the new column read_io_completed_total provides the total read IOs since the Resource Governor statistics were last reset.  Here is a subset of some of the new columns that have been added to this view:

The new Resource Governor view, dm_resource_governor_resource_pool_volumes, appears to overlap most of its columns with dm_resource_governor_resource_pools, but the breakdown is slightly different: the former divides this data by the disk volumes in use by the instance and the latter strictly by resource pools (which can be identical depending on configuration).  The new data is strictly related to IO on each volume and will only be populated when at least one user connection is made via the Resource Governor.  Until that point, it will remain empty.
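A minimal look at the new IO statistics might resemble the following query (the column names other than read_io_completed_total are written from memory here, so verify them against your instance):
SELECT pool_id,
       name,
       min_iops_per_volume,
       max_iops_per_volume,
       read_io_completed_total,
       write_io_completed_total
FROM sys.dm_resource_governor_resource_pools;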

In-Memory OLTP

By far, the biggest feature addition in SQL Server 2014 is the addition of memory-optimized tables and native compiled stored procedures.  Along with this feature are a total of 11 database-specific DMVs and 6 instance-level DMVs.  In order to adequately illustrate their usage, we will walk through the creation of some in-memory objects and show how the results are reflected in our DMVs.
Our example here will be a setup of 3 tables that describe ingredients, meals, and a one-to-many relationship that illustrates what a meal will contain.  I’ll include the steps to create a new database and configure it for memory-optimized tables:
CREATE DATABASE restaurant
GO

-- Create a filegroup for memory-optimized data and add a file
ALTER DATABASE restaurant ADD FILEGROUP restaurant_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE restaurant ADD FILE (name='restaurant_mod1', filename='c:\Data\restaurant_mod1') TO FILEGROUP restaurant_mod

USE restaurant
GO

-- This is a table of ingredients that will be referenced by other tables
CREATE TABLE dbo.ingredients (
       ingredient_id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=50000),
       ingredient_name VARCHAR(50) NOT NULL,
       cost DECIMAL(10,2) NOT NULL,
       color VARCHAR(25) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Create a table that will contain meals
CREATE TABLE dbo.meals (
       meal_id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 20000),
       meal_name VARCHAR(50) NOT NULL,
       cuisine VARCHAR(25) NOT NULL,
       meal_description VARCHAR(100) NOT NULL,
       recipe_description VARCHAR(2000) NOT NULL,
       spice_level TINYINT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- This table relates meals to ingredients as recipes
CREATE TABLE dbo.recipe (
       recipe_id INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
       meal_id INT NOT NULL,
       ingredient_id INT NOT NULL,
       INDEX ix_recipe_meal_id NONCLUSTERED HASH(meal_id) WITH (BUCKET_COUNT = 100000),
       INDEX ix_recipe_ingredient_id NONCLUSTERED HASH(ingredient_id) WITH (BUCKET_COUNT = 100000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

-- Insert some data for our restaurant:
INSERT INTO dbo.meals VALUES (1, 'Chicken Taco', 'Mexican', 'A spicy taco with chicken', 'Cook chicken, combine with beans and other ingredients in a taco shell.', 6)
INSERT INTO dbo.meals VALUES (2, 'Meatloaf', 'American', 'A hearty loaf of beef and spices', 'Combine all ingredients and bake at 350 for 90 minutes.', 2)
INSERT INTO dbo.meals VALUES (3, 'Lemonade', 'American', 'Refreshing lemonade for a cold day', 'Combine all ingredients and chill for an hour before serving.', 0)
INSERT INTO dbo.ingredients VALUES (1, 'Chicken', '3.99', 'White')
INSERT INTO dbo.ingredients VALUES (2, 'Taco Shells', '2.00', 'Yellow')
INSERT INTO dbo.ingredients VALUES (3, 'Lettuce', '1.50', 'Green')
INSERT INTO dbo.ingredients VALUES (4, 'Tomato', '1.25', 'Red')
INSERT INTO dbo.ingredients VALUES (5, 'Cilantro', '2.00', 'Green')
INSERT INTO dbo.ingredients VALUES (6, 'Black Beans', '0.50', 'Black')
INSERT INTO dbo.ingredients VALUES (7, 'Beef', '3.00', 'Brown')
INSERT INTO dbo.ingredients VALUES (8, 'Bread Crumbs', '1.50', 'Yellow')
INSERT INTO dbo.ingredients VALUES (9, 'Parsley', '1.75', 'Green')
INSERT INTO dbo.ingredients VALUES (10, 'Tomato Sauce', '0.60', 'Red')
INSERT INTO dbo.ingredients VALUES (11, 'Lemons', '0.75', 'Yellow')
INSERT INTO dbo.ingredients VALUES (12, 'Sugar', '2.50', 'White')
INSERT INTO dbo.ingredients VALUES (13, 'Water', '0.10', 'Clear')
INSERT INTO dbo.recipe VALUES (1, 1)
INSERT INTO dbo.recipe VALUES (1, 2)
INSERT INTO dbo.recipe VALUES (1, 3)
INSERT INTO dbo.recipe VALUES (1, 4)
INSERT INTO dbo.recipe VALUES (1, 5)
INSERT INTO dbo.recipe VALUES (1, 6)
INSERT INTO dbo.recipe VALUES (2, 7)
INSERT INTO dbo.recipe VALUES (2, 8)
INSERT INTO dbo.recipe VALUES (2, 9)
INSERT INTO dbo.recipe VALUES (2, 10)
INSERT INTO dbo.recipe VALUES (3, 11)
INSERT INTO dbo.recipe VALUES (3, 12)
INSERT INTO dbo.recipe VALUES (3, 13)
GO

-- Update statistics before continuing
UPDATE STATISTICS dbo.meals WITH FULLSCAN, NORECOMPUTE
UPDATE STATISTICS dbo.ingredients WITH FULLSCAN, NORECOMPUTE
UPDATE STATISTICS dbo.recipe WITH FULLSCAN, NORECOMPUTE
GO

-- Create a natively compiled stored procedure for returning all ingredients for a particular meal.
CREATE PROCEDURE dbo.ingredient_list @meal_id int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
       SELECT
              ingredients.ingredient_name
       FROM dbo.meals
       INNER JOIN dbo.recipe
       ON meals.meal_id = recipe.meal_id
       INNER JOIN dbo.ingredients
       ON ingredients.ingredient_id = recipe.ingredient_id
       WHERE meals.meal_id = @meal_id
END
GO

-- Run our natively compiled stored proc for an arbitrary meal

EXEC dbo.ingredient_list 1
GO
Now that we have some in-memory schema to play with, we can take a look at some of our new DMVs and how they work. 
sys.dm_db_xtp_object_stats
Basic object usage data is provided by this DMV.  A positive object_id indicates user objects, whereas a negative object_id indicates system objects.  System tables are used to store run-time information for the in-memory engine.  Since they are solely in memory, they are not referenced in system catalogs as other objects are.  We can join this to sys.all_objects in order to get more info on our schema.
SELECT
       dm_db_xtp_object_stats.*,
       all_objects.name
FROM sys.dm_db_xtp_object_stats
INNER JOIN sys.all_objects
ON all_objects.object_id = dm_db_xtp_object_stats.object_id
WHERE all_objects.object_id > 0 -- Explicitly filter out system objects

This query returns all user objects, so we can see our 3 memory-optimized tables and some stats on them.  Note that this data is only maintained since SQL Server's last restart.
sys.dm_db_xtp_index_stats
This view provides additional index statistics for all indexes on memory-optimized tables.  Similar to the previous view, we can get this data along with the index name using a join to sys.indexes:
SELECT
       name,
       type_desc,
       dm_db_xtp_index_stats.*
FROM sys.dm_db_xtp_index_stats
INNER JOIN sys.indexes
ON indexes.index_id = dm_db_xtp_index_stats.index_id
AND indexes.object_id = dm_db_xtp_index_stats.object_id
This query returns our 2 nonclustered indexes along with the default primary key indexes on each table we created:

Note that this data is maintained as of SQL Server's last restart.
sys.dm_db_xtp_table_memory_stats
This view provides memory usage statistics for all in-memory tables, including system tables (which will have a negative object_id if displayed).  A join to sys.all_objects provides additional information as needed:
SELECT
       all_objects.name,
       dm_db_xtp_table_memory_stats.*
FROM sys.dm_db_xtp_table_memory_stats
LEFT JOIN sys.all_objects
ON all_objects.object_id = dm_db_xtp_table_memory_stats.object_id
The output of this query will look like this:

A sum of any of the columns in this DMV (aside from object_id, of course) will give you the total memory usage/allocation for all in-memory objects.  System objects were kept here so that we don’t inadvertently filter them out, in the event that we do want to aggregate or report on this data.
sys.dm_db_xtp_memory_consumers
This view returns all memory consumers for the in-memory OLTP engine.  This data includes indexes, tables, and a number of system objects.  The following query will return the data from this view, along with the name of the table or index it relates to, if applicable:
SELECT
       all_objects.name AS name,
       indexes.name AS index_name,
       dm_db_xtp_memory_consumers.*
FROM sys.dm_db_xtp_memory_consumers
LEFT JOIN sys.all_objects
ON all_objects.object_id = dm_db_xtp_memory_consumers.object_id
LEFT JOIN sys.indexes
ON indexes.index_id = dm_db_xtp_memory_consumers.index_id
AND indexes.object_id = dm_db_xtp_memory_consumers.object_id
Note that any rows with a NULL object_id and a memory_consumer_type_desc of VARHEAP are memory that is set aside for use by variable length columns.  There will also be a single row per database for PGPOOL, which tracks memory consumption for the database page pool (used for table variables, some serialized scans, and other runtime operations).
sys.dm_db_xtp_hash_index_stats
This is an important view for verifying the performance of any hash indexes you have created on an in-memory table.  The following query will return info on your hash indexes, the parent object, and statistics on bucket usage:
SELECT
       object_name(dm_db_xtp_hash_index_stats.object_id) AS 'object name',
       indexes.name,
       floor((cast(empty_bucket_count as float)/total_bucket_count) * 100) AS 'empty_bucket_percent',
       dm_db_xtp_hash_index_stats.total_bucket_count,
       dm_db_xtp_hash_index_stats.empty_bucket_count,
       dm_db_xtp_hash_index_stats.avg_chain_length,
       dm_db_xtp_hash_index_stats.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats
INNER JOIN sys.indexes
ON indexes.index_id = dm_db_xtp_hash_index_stats.index_id
AND indexes.object_id = dm_db_xtp_hash_index_stats.object_id
A hash index is structured differently from a standard nonclustered index in that you create a set number of buckets which will be populated automatically via SQL Server’s hash function.  Hash indexes will only work on equality predicates (=) and do not support sorting on the index (as the data is not stored in order).  Hash indexes work best for key columns that are unique, as the more duplicate values exist, the more SQL Server needs to work in order to sort through a bucket of duplicate keys (all duplicate key values will always be stored in the same bucket).
In monitoring the performance of a hash index, we are primarily interested in the empty bucket count and the average chain length.  Running the query above, we can analyze this information and determine whether our indexes are being used effectively:

Microsoft recommends that the total_bucket_count be about 2x-4x the number of unique key values.  Too many buckets will result in wasted memory whereas too few buckets will result in slower searches when a bucket contains many key values.  A table scan will scan ALL buckets, including empty buckets.  As a result, an excessively large empty bucket count will negatively impact table scans.  As an aside to the data above, the number of buckets in a hash index will always be rounded up to the next power of 2 from the value you provide when the table is created.
The empty_bucket_percent is an indicator of how much space is left in the index before buckets need to start being reused for key values.  Having about 1/3 of the buckets empty allows for a safe level of growth.  dm_db_xtp_hash_index_stats can be used to monitor your hash indexes en masse to determine if any need to be recreated with a larger bucket count.
The avg_chain_length tells us, on average, how many keys are chained in each bucket.  Ideally, this value would be 1, or at least close to 1.  A high value for this stat can indicate that there are many duplicate keys in your index.  If this is the case, it may be worth considering using a standard nonclustered index instead.  A high avg_chain_length can also indicate that the bucket count is too low, forcing buckets to be shared.  Recreating the index with a higher bucket count will resolve this problem.
Our example above is far too small to be able to analyze with any intention to optimize it, but it illustrates the layout of this DMV and how the data will look when returned.  Hash indexes are extremely efficient for equality searches and scans and this new view will allow you to make the best decisions with regards to their creation, usage, and maintenance.
sys.dm_db_xtp_nonclustered_index_stats
This view tracks index usage statistics on all nonclustered indexes.  This does not include hash indexes, though, which are included in dm_db_xtp_hash_index_stats.  In order to illustrate its usage, we will create an additional table with a standard nonclustered index, and then analyze the data in the view:
CREATE TABLE dbo.reviews (
       review_id INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
       review_title VARCHAR(50) NOT NULL,
       review_content VARCHAR(2000) NULL,
       meal_id INT NOT NULL INDEX ix_review_meal_id NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
INSERT INTO dbo.reviews VALUES ('Delicious tacos!', 'This recipe made very flavorful tacos.  I''d add jalapeno peppers next time.', 1)
INSERT INTO dbo.reviews VALUES ('This meatloaf is OK...', '', 2)
INSERT INTO dbo.reviews VALUES ('Yikes!', 'Needs...more...sugar...badly...', 3)
GO
Now, we can run a query on this view, referencing sys.indexes in order to grab the index name:
SELECT
       object_name(dm_db_xtp_nonclustered_index_stats.object_id) AS 'object name',
       indexes.name,
       dm_db_xtp_nonclustered_index_stats.*
FROM sys.dm_db_xtp_nonclustered_index_stats
INNER JOIN sys.indexes
ON indexes.index_id = dm_db_xtp_nonclustered_index_stats.index_id
AND indexes.object_id = dm_db_xtp_nonclustered_index_stats.object_id
Running this query returns data that will look similar to this:

Using this large amount of information, we can quickly see how large our index is, as well as how often it is being written and read, similar to existing index DMVs that track standard indexes on non-memory-optimized tables.  Six columns in this view reference retry counts and are worth keeping an eye on.  A high number of retries signifies a concurrency issue, as SQL Server is unable to perform a specific action on the index.  Strangely enough, Microsoft's recommendation for handling this issue is to give their support staff a call.
sys.dm_xtp_gc_stats
sys.dm_xtp_gc_queue_stats
sys.dm_db_xtp_gc_cycle_stats
SQL Server 2014 utilizes a system garbage collector that will clean up old versions of rows in memory.  When a row is deleted or updated (updates are essentially a delete & insert), the old version that was deleted becomes garbage.  An End Timestamp is populated in the row header, indicating when the row was deleted.  It is up to the garbage collection process (which is run by both system and user processes) to free up the memory so it can be reused.  For the sake of this article, I’ll keep the DMV descriptions brief, but more info can be found on the specific columns for each of these DMVs on MSDN.
dm_xtp_gc_stats provides general statistics on the garbage-collection process.  This data applies to all In-Memory OLTP data across the entire SQL instance, and is not limited to the current database.
dm_xtp_gc_queue_stats will return a row for each logical CPU on a database server.  Each row represents a garbage collection worker queue and the current and total work it has performed.  The key metric to watch in this view is whether the current_queue_depth is growing and not being processed.  This can be caused by long-running transactions and can be a sign of inefficient SQL.  This view also applies to in-memory garbage collection across the entire SQL Server instance.
dm_db_xtp_gc_cycle_stats tracks committed transactions that have deleted rows, leaving stale data behind.  These transactions are moved into queues associated with generations, depending on when they were committed with respect to the last active transaction.  If memory pressure occurs on a server, garbage collection will move through this data in order of the age of transactions (oldest first).
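All three views can simply be selected from to get a feel for their contents:
-- Instance-wide garbage collection statistics
SELECT * FROM sys.dm_xtp_gc_stats;
-- One row per garbage collection worker queue (one per logical CPU); watch current_queue_depth
SELECT * FROM sys.dm_xtp_gc_queue_stats;
-- Per-database garbage collection cycle and generation data
SELECT * FROM sys.dm_db_xtp_gc_cycle_stats;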
Another 8 views were added in SQL Server 2014 that relate to In-Memory OLTP, but I will not cover them in detail here due to lack of space to elaborate on their usage and how their underlying features tie to the DMVs.  The remaining views comprise 4 database-specific views as well as 4 instance-specific views:
sys.dm_db_xtp_checkpoint_files
sys.dm_db_xtp_checkpoint_stats
sys.dm_db_xtp_merge_requests
sys.dm_db_xtp_transactions
sys.dm_xtp_system_memory_consumers
sys.dm_xtp_threads
sys.dm_xtp_transaction_recent_rows
sys.dm_xtp_transaction_stats

DMV Removal/Deprecation?

Based on all of Microsoft’s documentation, as well as experimentation, no DMVs have been removed in SQL Server 2014, nor have any been deprecated for removal in a future version.  No columns were removed from any views either, though many were added.

Conclusion

To help compare all of the changes in dynamic management views from SQL Server 2012 to SQL Server 2014, I have attached an Excel spreadsheet that contains a row for each DMV in each version of SQL Server.  Rows highlighted yellow were mentioned in this article while the rest were either unchanged or did not receive significant enough change to warrant further discussion.