How to Upgrade In-Place From Microsoft SQL 2012 to Microsoft SQL 2016

I am a fan of good documentation and love it when you can find a full how-to for the thing you are trying to do.  When it comes to SQL 2016, however, there aren’t a lot of how-tos out there yet, since SQL Server 2016 only came out in July.  I am writing this because there are a couple of warnings and gotchas, and I hope this post will save someone time when doing their own upgrade.  One thing I would like to note is that this was accomplished on a single server with no high-availability groups, no special setup, no special anything.  Also, the pictures will appear quite large, but I wanted to make sure the details in the graphics are viewable.

Pre-Install

Before you begin the fun process of upgrading the install, there are a few things you will want to do.  Below is my checklist for making sure things didn’t blow up in my face as this was a production server (no servers were hurt in the making of this blog post):

  1. Make sure you have good backups!  This is the number one rule for anything DBA-wise, but for upgrades, make sure you have one, two, even ten versions of the backups.  If you can, test them as well.
  2. Take screenshots of all your settings.  A normal upgrade of SQL, whether you are coming from 2005, 2008, or 2012, is an uninstall and re-install of the application.  If you have linked servers or mirrored replicas, or have simply changed the memory settings to limit SQL’s share of the OS, capture those screenshots.
  3. If you have Read-Only databases, you will have to change them to Read-Write before you start.  The command is simple:
    1. USE [master]
    2. GO
    3. ALTER DATABASE [{name of database}] SET READ_WRITE WITH NO_WAIT
    4. GO
  4. Check to make sure the compatibility level for each database is not set too low.  The upgrade process can handle 2005 databases, but gets a bit weird with them.  If I remember correctly, you have to upgrade a 2000 database to at least 2005 before you can upgrade it any further.  (A quick query for checking this follows the list.)
  5. Run the Data Migration Assistant.  It is invaluable in prepping your server for the upgrade.  You can find it here: https://www.microsoft.com/en-us/download/details.aspx?id=48119
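To make steps 2 through 4 less tedious, a few quick catalog queries will capture most of what you need.  This is a sketch to run in SSMS before you begin; the system views used here (sys.configurations, sys.servers, sys.databases) are standard:

-- Capture current instance settings so you can compare them after the upgrade.
SELECT name, value_in_use FROM sys.configurations ORDER BY name;

-- List linked servers you may need to re-create or re-check.
SELECT name, product, provider FROM sys.servers WHERE is_linked = 1;

-- Flag read-only databases and low compatibility levels before you start.
SELECT name, compatibility_level, is_read_only
FROM sys.databases
WHERE is_read_only = 1 OR compatibility_level < 100
ORDER BY name;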

Install

When you start the upgrade, make sure your user has administrative rights on the server.  Also, be aware that if you have registered servers, you will need to re-register them after the upgrade.  First, open the CD or ISO if you have one and double-click “setup.exe.”

[Screenshot: SQL Server Installation Center]

Next, click “Upgrade from a previous version of SQL Server.”

[Screenshot: the upgrade option in the Installation Center]

If you have a product key, enter it here.  Otherwise, choose “Specify a free edition” and click Next.

[Screenshot: Product Key page]

You will have to accept the Microsoft Software License Terms.

[Screenshot: License Terms page]

Click the checkbox to accept the agreement and click Next.

[Screenshot: License Terms accepted]

The installation will begin looking for prerequisites and install them if you do not have them.  On this installation, all the prerequisites were met.  As of this writing, there were no product updates to be pulled either.  Click Next.

[Screenshot: prerequisite and product update checks]

Make sure you select the instance you want to upgrade.  On this installation, there was only the default instance.  When you are satisfied, click Next.

[Screenshot: Select Instance page]

You are now presented with a list of features to install.  As this is an upgrade, they should all be greyed out.  If everything looks normal, click Next.

[Screenshot: Select Features page, greyed out]

Now we configure the instance.  For the default instance, you shouldn’t have to change anything; if you do have changes, this is where you make them.  Click Next when done.

[Screenshot: Instance Configuration page]

If you are on an Active Directory domain, the SQL Server and SQL Agent services should run as domain users with administrative rights to the system.  We are not really using SSIS or DTS on this server, so those can be left alone.  However, if you are using those services in your configuration, enter domain users for them here.  Click Next.

[Screenshot: Server Configuration service accounts]

If you have Full-Text services running and are actively using them, you should probably leave the default, “Import,” selected.  However, if you want to rebuild or reset the catalogs, select one of the other options.  Honestly, I have never used Full-Text services, so you might want to double-check the documentation on this one.  Click Next when ready.

[Screenshots: Full-text Upgrade page and subsequent checks]

The installation will check its feature rules and give you any warnings or errors before you continue.  On this installation, there was a database at the 2005 compatibility level.  Since it was only a test database, I ignored the warning; moreover, I was curious how the upgrade would handle a database at that compatibility level.  Click Next when it finishes.

[Screenshot: Feature Rules check with the compatibility warning]

The installation now produces a summary of your choices.  You can go back or click Upgrade when you are ready.

[Screenshot: Ready to Upgrade summary]

The upgrade will now proceed.  It ran about fifteen minutes on my older Intel i5 server with 48GB of RAM.

[Screenshot: Upgrade Progress]

Once the upgrade finishes, it will require a restart.  Click OK.  This will not restart the machine; it only warns you that a restart is needed.

[Screenshots: restart required notice]

The final window presents everything the upgrade accomplished, including any warnings or errors.  If you have errors, you will have to go back, fix them, and do a Repair on the installation.  Click Close and reboot your machine.

Post-Install

After the reboot, verify that all the services have started:

[Screenshot: SQL Server services running]

If all the services above are running (along with any others particular to your installation), close the window.  We now need to get PowerShell working.  To do this, we can re-run the setup application.

[Screenshot: SQL Server Installation Center]

This time, click “Install SQL Server Management Tools.”  It will open a link to this web page: https://msdn.microsoft.com/en-us/library/mt238290.aspx

You will need that same link on your workstation to gain all the benefits of the 2016 tools.  Download the file and click “Run.”

[Screenshot: SSMS installer download]

Once the installation starts, click Install.

[Screenshot: SSMS installer]

The installation will proceed on its own.

[Screenshot: SSMS installation progress]

Click Restart.

PowerShell

After you do the upgrade and install SSMS, you may encounter the following error:

Restore-SqlDatabase : Cannot bind parameter 'RelocateFile'. Cannot convert the "Microsoft.SqlServer.Management.Smo.RelocateFile" value of type "Microsoft.SqlServer.Management.Smo.RelocateFile" to type "Microsoft.SqlServer.Management.Smo.RelocateFile".

Run the following command to verify which assembly versions are loaded; notice that both the old and new versions appear:

PS SQLSERVER:\> [AppDomain]::CurrentDomain.GetAssemblies().GetTypes() | ? FullName -eq 'Microsoft.SqlServer.Management.Smo.RelocateFile' | select Assembly

Assembly
--------
Microsoft.SqlServer.SmoExtended, Version=13.0.0.0, Culture=neutral, PublicKeyToken=123456789123456
Microsoft.SqlServer.SmoExtended, Version=11.0.0.0, Culture=neutral, PublicKeyToken=123456789123456

To correct the issue, you will have to ignore most of what you find on the Internet.  Most of the solutions I found amounted to either re-installing the entire server or adding a workaround to every PowerShell script you have.

The easiest way is to change the system environment variables to remove the unneeded SMO module path.  As you can see above, both version 13 and version 11 are installed.  2016 made a large number of enhancements to the PowerShell module (sqlps), and you will want to use them.  Moreover, you probably want to use commands like Restore-SqlDatabase without having to do the messy v1 workaround.  Bring up the Advanced tab of System Properties on the server:

[Screenshot: System Properties, Advanced tab]

Now click Environment Variables.

[Screenshot: Environment Variables dialog]

In the screenshot above, notice that PSModulePath is highlighted.  We will want to edit it.

[Screenshot: editing PSModulePath]

Remove the following entry from the PSModulePath value in the “Edit System Variable” dialog:
C:\Program Files (x86)\Microsoft SQL Server\110\Tools\PowerShell\Modules\

Only the 2016 (130) SQL Server value should remain:
C:\Program Files (x86)\Microsoft SQL Server\130\Tools\PowerShell\Modules\

The entire line then looks like this:
%SystemRoot%\system32\WindowsPowerShell\v1.0\Modules\;C:\Program Files (x86)\Microsoft SQL Server\130\Tools\PowerShell\Modules\
Feel free to point the first entry at a newer PowerShell module path if your system has one.
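If you would rather script the change than click through the dialogs, something like this from an elevated PowerShell prompt should do it.  This is a sketch: it assumes the 110 path is the only entry you want gone, so double-check the resulting value before rebooting:

# Read the machine-level PSModulePath.
$path = [Environment]::GetEnvironmentVariable('PSModulePath', 'Machine')

# Drop the SQL 2012 (110) module folder and keep everything else.
$newPath = ($path -split ';' |
    Where-Object { $_ -notlike '*SQL Server\110\Tools\PowerShell\Modules*' }) -join ';'

# Write it back at machine scope (requires an elevated prompt).
[Environment]::SetEnvironmentVariable('PSModulePath', $newPath, 'Machine')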

Once you make the change, reboot, and you should be good to go!

Using T-SQL To Access Active Directory

I tend to need queries that tie a lot of sources together.  Although Linked Servers are good for this (with caveats), I often run into cases where a Linked Server isn’t the perfect fit.  Active Directory is one of those problematic treasure hunts.  There are a lot of good resources out on the internet, but they don’t take the totality of the issue into account.

This past week, I needed to pull data into a job where T-SQL was required to grab the data from AD and incorporate it as part of the job.  However, there was a problem.  Whenever I ran either an OPENROWSET or an OPENQUERY on our Microsoft SQL Server 2012 server, it would throw an error after 19,901 rows no matter what I did.  There were plenty of resources referring to 2005 and 2008, but none specific to 2012.  For the older versions, the resources describe a limit of only 901 objects being returned; this has improved in 2012, and I cannot speak to 2014 or 2016 yet.  Still, there wasn’t a definitive way of pulling all the data back in a single query, nor any real reference on what to do.

My solution isn’t perfectly elegant, but it does get the job done.  If anyone has a less “codey” way of handling it, I would love to hear from you.  What I did was create a temporary table to hold the data and a table variable to hold the letters of the alphabet (in this case I was looking at users).  The query should still work if you change from the user class to computers or groups; you will only need to change the fields returned and the WHERE clause.  Essentially, we start by creating a temporary table for the data called #ADTable, followed by a table variable called @letterList that has the alphabet inserted into it.  We then declare a cursor, pull in the first letter, and use it as a variable in the dynamic insert query.  This loops through each letter, populating #ADTable until there are no more letters.  We can then display all the information or do whatever we need!

I will show both the OPENROWSET and OPENQUERY functions below:

OPENROWSET –

IF OBJECT_ID('TEMPDB..#ADTable') IS NOT NULL DROP TABLE #ADTable;
CREATE TABLE #ADTable(
sAMAccountName nvarchar(255),
sn nvarchar(255),
givenName nvarchar(255),
displayName nvarchar(255),
objectGUID varbinary(max)
);
DECLARE @Letter CHAR(1);
DECLARE @letterList TABLE(letter CHAR(1));
DECLARE @sql nvarchar(max);

INSERT INTO @letterList
VALUES('a'),('b'),('c'),('d'),('e'),('f'),('g'),('h'),('i'),
('j'),('k'),('l'),('m'),('n'),('o'),('p'),('q'),('r'),('s'),
('t'),('u'),('v'),('w'),('x'),('y'),('z');

DECLARE Letter_Cursor CURSOR FOR SELECT letter FROM @letterList;
OPEN Letter_Cursor;
FETCH NEXT FROM Letter_Cursor INTO @Letter;
WHILE @@FETCH_STATUS = 0
BEGIN
-- Build one OPENROWSET call per letter; doubled single quotes ('')
-- are escapes inside the dynamic SQL string.
SET @sql = 'INSERT INTO #ADTable (sAMAccountName, sn, givenName, displayName, objectGUID) SELECT TOP 19901 sAMAccountName, sn, givenName, displayName, objectGUID FROM OPENROWSET(
''ADsDSOObject'',''User ID={AD Username};Password={AD Password};ADSI Flag=0x11;Page Size=70000'',
''SELECT sAMAccountName, sn, givenName, displayName, objectGUID
FROM ''''LDAP://{your.domain}/ou={Some Org Unit},dc={Your},dc={Domain}''''
WHERE objectClass=''''User'''' AND givenName = ' + '''''' + @Letter + '*'''' AND sn = ''''*'''' AND company = ''''*'''' '') ';
EXEC(@sql);
FETCH NEXT FROM Letter_Cursor INTO @Letter;
END;
CLOSE Letter_Cursor;
DEALLOCATE Letter_Cursor;

SELECT sAMAccountName, givenName, sn, displayName, CONVERT(NVARCHAR(MAX), objectGUID, 1) AS objectGUID FROM #ADTable;
DROP TABLE #ADTable;
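Note that the OPENQUERY version below assumes a linked server named ADSI already exists.  If you do not have one, the standard setup for the ADSI provider looks like this; add a login mapping with sp_addlinkedsrvlogin if you do not want to use the current security context:

EXEC master.dbo.sp_addlinkedserver
    @server = N'ADSI',
    @srvproduct = N'Active Directory Service Interfaces',
    @provider = N'ADSDSOObject',
    @datasrc = N'adsdatasource';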

OPENQUERY –

IF OBJECT_ID('TEMPDB..#ADTable') IS NOT NULL DROP TABLE #ADTable;
CREATE TABLE #ADTable(
sAMAccountName nvarchar(255),
sn nvarchar(255),
givenName nvarchar(255),
displayName nvarchar(255),
objectGUID varbinary(max)
);
DECLARE @Letter CHAR(1);
DECLARE @letterList TABLE(letter CHAR(1));
DECLARE @sql nvarchar(max);

INSERT INTO @letterList
VALUES('a'),('b'),('c'),('d'),('e'),('f'),('g'),('h'),('i'),
('j'),('k'),('l'),('m'),('n'),('o'),('p'),('q'),('r'),('s'),
('t'),('u'),('v'),('w'),('x'),('y'),('z');

DECLARE Letter_Cursor CURSOR FOR SELECT letter FROM @letterList;
OPEN Letter_Cursor;
FETCH NEXT FROM Letter_Cursor INTO @Letter;
WHILE @@FETCH_STATUS = 0
BEGIN
-- Same pattern as above, but through the ADSI linked server.
SET @sql = 'INSERT INTO #ADTable (sAMAccountName, sn, givenName, displayName,
objectGUID) SELECT TOP 19901 sAMAccountName, sn, givenName, displayName,
objectGUID FROM OPENQUERY(
ADSI,
''SELECT sAMAccountName, sn, givenName, displayName, objectGUID
FROM ''''LDAP://{your.domain}/ou={Your Org Unit},dc={Your},dc={Domain}''''
WHERE objectClass=''''User'''' AND givenName = ' + '''''' + @Letter + '*'''' '') ';
EXEC(@sql);
FETCH NEXT FROM Letter_Cursor INTO @Letter;
END;
CLOSE Letter_Cursor;
DEALLOCATE Letter_Cursor;

SELECT sAMAccountName, givenName, sn, displayName, CONVERT(NVARCHAR(MAX), objectGUID, 1) AS objectGUID FROM #ADTable;
DROP TABLE #ADTable;

Note that you can add, subtract, or change the fields in the SELECT statements, but you will need to do so in two places: in the @sql expression and in the SELECT at the bottom.  Also, I used single quotes for everything, so when you see '''' that is four single quotes (an escaped quote inside an escaped string), not double quotes.  I hope this helps!

Tomcat PermGen Issues for the Confused

At my work we have multiple environments in our Application Development department.  Each environment is separated into clusters with a load balancer in front.  Most of the individual servers run httpd with mod_jk to talk to Tomcat.  One of the issues we experience is the PermGen space running out and all the cores hitting 100%.  Recently this happened on our production server, and we realized a few things.

Our Tomcat servers host several Grails applications.  Our JDBC connection through Tomcat uses a single entry in server.xml for each database, which means all connections to a given database are funneled through that one pool.  As people open and close their connections, the garbage collector sees these objects but does not collect them.  Moreover, Grails tends not to optimize the queries it runs against the database, and these “ad-hoc” queries are seen as new objects, which are also not collected.  Another thing to note is builds deployed from Jenkins or another build product: each new build is not completely collected by the garbage collector.  Over time, the PermGen space will be used up.

On our servers we noticed that when enough people were logged on to the application, the CPU would run high and eventually no more people could log on.  The best way to solve this issue is to add the following to the startup environment settings in Tomcat:

-XX:+EliminateLocks -XX:+UseBiasedLocking

As a side note, always make sure you put the following in your startup as well:

-server -XX:+UseG1GC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled

These settings make better use of memory and initiate garbage collection with the newer generation of collectors.  If you look closely, you will see that CMSPermGenSweepingEnabled is actually an older, deprecated flag, but it is still useful, and since Tomcat doesn’t complain, I like to leave it in there for peace of mind.
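If you do not already have a central place for these flags, Tomcat’s startup script sources bin/setenv.sh when the file exists, so that is a reasonable home for them.  A minimal sketch (you create this file yourself; it does not ship with Tomcat):

# $CATALINA_BASE/bin/setenv.sh -- sourced automatically by catalina.sh on startup
export CATALINA_OPTS="$CATALINA_OPTS -server -XX:+UseG1GC \
  -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled \
  -XX:+EliminateLocks -XX:+UseBiasedLocking"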

However, other tuning adjustments should be made to both the system and the server.xml file to actually affect the PermGen space issue.  First, let’s look at how Tomcat uses memory.  I won’t purport to be an expert at memory utilization, but Tomcat tends to use swap space quite a bit, and because of that swapping it can greatly slow down the server in terms of thread execution.  On our Oracle systems, we adjust the swap settings so the server uses physical memory first; it’s faster.  To that end, you can tell the kernel how much weight to give runtime memory as opposed to page cache.  To see what yours is set to, type this:

cat /proc/sys/vm/swappiness

My systems sit at 30 (I believe that is the CentOS default).  We want to reduce that.  Simply type:

echo 5 | sudo tee /proc/sys/vm/swappiness

This changes the weighting to 5 in favor of runtime memory.  Now run:

cat /proc/sys/vm/swappiness

again to double-check the change.  Finally, open /etc/sysctl.conf and add the following so the setting holds across reboots:

vm.swappiness=5

We also want to change server.xml.  There are two settings that I look at directly on ours.  First are maxThreads and minSpareThreads.  The default looks like the following:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="4"/>

The rule of thumb is to set maxThreads to a multiple of 200 per core.  If you have two cores, use 400.  If you have four cores, use 800.  For our system, we will now make the entry look like this:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="400" minSpareThreads="4"/>

You may also notice that under the <GlobalNamingResources> section, each JNDI resource has certain pool parameters set.  Below is a typical default for Tomcat 8:

maxTotal="100" maxIdle="75" maxWaitMillis="30000"

Below is the old setup for the same line:

maxActive="100" maxIdle="75" maxWait="30000"

I’ve mixed up maxTotal and maxActive I don’t know how many times.  If you are on a version earlier than Tomcat 8, use maxActive.  We want to change maxTotal to accommodate more connections.  I would do this in increments based on the amount of memory you have dedicated to Tomcat.  For now, a good setting might be:

maxTotal="400" maxIdle="75" maxWaitMillis="30000"
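For context, here is what a complete, hypothetical JNDI resource looks like with the Tomcat 8 attribute names.  The resource name, driver, URL, and credentials are placeholders, not values from our setup:

<GlobalNamingResources>
  <!-- Hypothetical pooled datasource; swap in your own driver, URL, and credentials -->
  <Resource name="jdbc/AppDB" auth="Container" type="javax.sql.DataSource"
            driverClassName="org.postgresql.Driver"
            url="jdbc:postgresql://dbhost:5432/appdb"
            username="app" password="secret"
            maxTotal="400" maxIdle="75" maxWaitMillis="30000"/>
</GlobalNamingResources>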
And finally, if you want to diagnose issues with heap and PermGen faster, I cannot recommend enough using VisualVM and installing sysstat on your CentOS 6+ servers.  VisualVM is a Java-based heap and thread monitor.  It requires JMX and some configuration changes to run; I will try to put together a tutorial and install guide in one of my next posts.  Sysstat will give you several tools, not the least of which are sar and iostat.  Use them, and good hunting!

JMX Connectivity for the Blind

While working on our Tomcat servers, I’m often astonished at how much I didn’t know that would have made my life a lot easier.  Case in point: we were required to remove the Tomcat manager from all our servers, since it is a PCI compliance issue and makes it very easy for an attacker to shut down or modify your site.  But we still needed access to the information it provided (such as: is this Tomcat application up?  Are there active sessions in the application?), and we weren’t sure what to do.  From on high comes VisualVM to save the day!

VisualVM is great for showing CPU, threads, memory, PermGen space, and the like.  However, it has a plugin called MBeans that gives you a ton of information about your system.  To let you gather your data while keeping the system secure, you will need to set up JMX, which creates a bi-directional channel between a remote session and Tomcat’s JVM.  To set it up, you will need to add some environment variables to your Java startup and a line to your server.xml, like below:

Environment file additions:

-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Djava.rmi.server.hostname={Enter IP Here} -Dcom.sun.management.jmxremote.password.file=/usr/share/tomcat/conf/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/usr/share/tomcat/conf/jmxremote.access

Server.xml file additions:

<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="9055" rmiServerPortPlatform="9056" />

Let me go into a little detail about where to add those.  In the environments I build, I try to have a separate startup parameters file so that it is always the same on every Tomcat cluster I create.  If you don’t have a separate file, that’s OK; simply add the variables to the init.d/tomcat startup file, or to catalina.sh or startup.sh if you are using those.  You can add them to the end of the list and save.

For the server.xml file, you simply need to make it the last line of listeners under
<Server port="8005" shutdown="SHUTDOWN">
before you add any global JNDI resources.  Note that JmxRemoteLifecycleListener ships in the Tomcat “extras” package (catalina-jmx-remote.jar) rather than the base distribution, so make sure that jar is in Tomcat’s lib directory.

The next step is to ensure you open the proper ports for your system, as shown in the RMI platform settings above.  The rmiRegistryPortPlatform accepts the initial connection on port 9055, and the session then moves to the fixed port 9056 once the connection is acknowledged.  This prevents JMX from using random ports for communication, so you can keep your firewall locked down.
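On a CentOS 6 box running iptables, opening those two ports might look like the following; these rules are illustrative, so adapt them to whatever firewall tooling you actually use:

# Allow the JMX registry port and the fixed server port, then persist the rules.
iptables -A INPUT -p tcp --dport 9055 -j ACCEPT
iptables -A INPUT -p tcp --dport 9056 -j ACCEPT
service iptables save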

As you may have also noticed, there are jmxremote.password and jmxremote.access files that need to be created.  The format is below:

jmxremote.access –
{name of user}          {readonly or readwrite}

jmxremote.password –
{name of user}          {password}

With the access file, a user can have either readonly or readwrite access.  Just remember to replace what I have in curly brackets above with the correct information.
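Here is a concrete, made-up pair of files (the user names and passwords are placeholders, and I am assuming your service account is named tomcat):

# /usr/share/tomcat/conf/jmxremote.access
monitor   readonly
admin     readwrite

# /usr/share/tomcat/conf/jmxremote.password
monitor   NotARealPassword1
admin     NotARealPassword2

Then lock the password file down, since the JVM will refuse to start if it is readable by other users:

chmod 600 /usr/share/tomcat/conf/jmxremote.password
chown tomcat:tomcat /usr/share/tomcat/conf/jmxremote.password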

Once you restart Tomcat, you will be able to use the remote JMX console.  To test whether it is listening, just type:

netstat -an | grep 9055

You can then open VisualVM and make a connection.  We will look at that in another lesson.

Query and Programming Comparisons OR CTEs and Why I Love Them

I’ve been a database administrator off and on for nearly seven years.  In that time, I’ve had to write a lot of complex queries to answer simple questions.  One of the things you start to notice is that many queries are actually reflections of programming logic.  As an example, say we have several tables that we need to pull data from, but we don’t want the query to be expensive (that is, take a lot of time and resources).  We try the standard approach of sub-queries nested in joins, and find that the WHERE logic forces a great deal of selectivity into the query.  It’s great that we got our result set, but the query took two to five minutes with a ton of disk I/O and CPU.  That’s bad.

In programming, you learn to set things up with classes that contain the methods you need, keeping the main class extremely small.  For you procedural folks out there, it’s the same as creating functions in your program and having only a few calls to those functions run the program.  It’s cleaner and tends to run faster (in my opinion).  The same approach can be applied to database queries.

I was recently given an SSRS report and told to automate it and adjust it for the new file format the vendor wanted.  When the report ran, it took ten minutes to complete and had several query hints to limit processor utilization.  The report did have a couple of temporary tables, but the majority of the data was filtered through several nested joins and a VERY long list in the WHERE and ORDER BY clauses.  This was no good.

I can’t put the original query here (it would be pages of reading, and it is proprietary to my work), but suffice it to say the query was 350 lines long.  This is what I did to change it.  First, I pulled the sub-queries out of the JOIN list and made them temporary tables.  I then created a series of indexes and statistics on the new temporary tables; you cannot do this within a sub-query, and without them the data takes longer to pull together.  I also added a Common Table Expression (CTE) before the main SELECT for the history information.  This made the main query look much cleaner and smaller.  The beauty of the CTE is that it can be read like a table, expanded in the query itself, or used to set up a cursor (although I personally hate cursors).

Think of each temporary table and CTE (you can have more than one) as a function in procedural programming or a class in object-oriented programming.  When you are ready for your main program to run, you are just calling the code above it.  Although I can’t show the actual query, I will show a skeletal structure of what it would look like instead:

-- Stand-in for a larger sub-query: pull employees into a temp table.
CREATE TABLE #fingfangfoom (
empid INT,
empname VARCHAR(250),
city VARCHAR(250)
);
INSERT INTO #fingfangfoom (empid, empname, city)
SELECT empid, empname, city
FROM Employees;

CREATE TABLE #Customerfoo (
personid INT,
businessname VARCHAR(255),
address1 VARCHAR(255),
city VARCHAR(255),
zip VARCHAR(10)
);
INSERT INTO #Customerfoo (personid, businessname, address1, city, zip)
SELECT custid, custname, address1, city, zip
FROM Customers;

CREATE TABLE #Orders (
shipname VARCHAR(255),
shipaddress VARCHAR(255),
shipcity VARCHAR(255),
shipzip VARCHAR(10),
orderdate datetime
);
INSERT INTO #Orders (shipname, shipaddress, shipcity, shipzip, orderdate)
SELECT shipname, shipaddress, shipcity, shipzip, orderdate
FROM Orders;

-- The CTE does the "front work" of joining the archive tables once,
-- so the main SELECT below stays small and readable.
WITH cte_history (custid, custname, transactionid, productid, productname, orderdate)
AS (
SELECT ca.custid, ca.custname, tr.transactionid, pr.productid, pr.productname, tr.orderdate
FROM CustomerArchive ca
RIGHT JOIN Transactions tr
ON ca.custid = tr.custid
LEFT JOIN ProductArchive pr
ON tr.productid = pr.productid
WHERE tr.quantity > 1
)
SELECT fff.empname, ch.custid, ch.productid, O.orderdate, cf.address1, cf.city, cf.zip
FROM cte_history ch
INNER JOIN #Orders O
ON ch.custname = O.shipname
LEFT JOIN #Customerfoo cf
ON ch.custid = cf.personid
LEFT JOIN #fingfangfoom fff
ON fff.empname = ch.custname;

This is a very simple example in which we pull all employees that are customers and show their transactions with the store; in other words, how loyal are the employees?  We could further refine the query by showing only those with a transaction count greater than five.  Note that the query builds a history CTE from the archive tables and joins it to the current tables.  It’s a rough query, as there will be plenty of NULL values in the results.

The nice thing about the above query is that we can take the #Orders table and add an index to it to speed up the results.  Moreover, now that the front work is done by the CTE, the actual cost of the main SELECT statement is low.  Using this approach on the report I mentioned above, I was able to get the ten-minute report down to one minute thirty-two seconds.  If you build SSIS-style jobs like I do, the amount of time each job takes can add up.
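Indexing the temp table is a one-liner.  For example, since the main SELECT above joins #Orders on shipname, a hypothetical index might be:

-- Hypothetical: index the join column used by the main SELECT.
CREATE NONCLUSTERED INDEX IX_Orders_shipname ON #Orders (shipname);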

The point is to do as much as you can up front before you run the main query (especially if it is a long one).  It will save you time, make the query more readable, and make it much easier for someone to figure out if they have to look at it years later.