1
Who is: a SQL developer? A developer? A DBA? Something else – what?
Who has: their database in source control? A CI process for SQL development?
Who uses: SSDT? SSMS? Notepad? The Redgate tools, SQL Compare etc.?
2
A C# developer walks into a bar and has a beer; a SQL developer doesn't walk into the bar, because the last release he did deleted someone else's code. This is about how we make changes to our database schema and code, merge the changes with other people, test those changes and, if happy, deploy those changes *or* build deployment scripts for other environments.
The reasons we do it:
We can work on the same code with other people – we can collaborate.
We can do things like refactor the code to improve it as we go along.
We can go to the pub knowing we (probably) haven't broken anything.
We can start making releases simpler and more predictable – no more hours or even days gathering scripts and then not getting them all run.
The reason we don't do it? Those who don't, why not? Seems too hard? Tooling? DBA says no?
3
I have worked in support, as a DBA and as a developer, so I am well positioned to see the world from all angles. I worked as a DBA with SQL Server 2000–2008 and have switched between being a pure SQL developer and C/C# development – I spent about 4 years just doing C# development and have come back to work with SQL, and I am shocked that hardly anyone has their databases in source control, let alone a full CI pipeline, so I am doing whatever I can to try to change that. If you have any questions about managing a more agile development pipeline, I really love talking about this stuff, so come and grab me if you like.
4
We start the process with our dev environment
Until we have chosen that, we can't sketch out what it will look like. What are our options?
Notepad – who uses that? I learnt C using Notepad and grep, but that was a while ago, and even though I still use it every now and then, it isn't optimal.
SSMS – this is the default. You know the old adage, "give a man a fishing rod and he will use it to pick his nose"? Well, maybe that isn't exactly how it goes, but the point is that SSMS is for running queries; it isn't a development IDE. Interestingly, often the hardest part of setting up a CI pipeline is getting people to stop using SSMS – if you are struggling, a good halfway house is to put the Redgate source control tool into SSMS, but ideally move developers away from it into SSDT.
SSDT – who uses SSDT? SSDT gives us refactoring support, so we can rename objects and, more importantly, when the change is deployed the object is renamed rather than dropped and recreated, losing data. We get a free deployment tool via dacpacs – does anyone know what dacpacs are? We also get the DacFx API, which is awesome: it basically lets us write C# code to parse SQL statements, generate new ones and deploy changes to a database. If you have ever tried to write a T-SQL parser you will know how hard it is; the DacFx API gives us one for free.
0xDBE is by JetBrains and has a pretty nice UI; if we didn't have SSDT, that is what I would use.
Toad – there are a few more IDEs; I haven't used them in any real anger, but take a look and see what there is.
For this demo we are going to be using SSDT.
5
The first thing we need to do is get the code and schema out of the database and into text files. When getting our database into source control we want text files with the contents – we could put in a backup of the database or the mdf files, but comparing those is basically impossible, so we couldn't effectively track changes. So what I am going to do is create a new SSDT project, right-click the project name and choose "Import Database" – this will import from a database into a dacpac. Alternatives are using SSMS to script out all the objects, or using Redgate's SQL Compare. If I were going to use the Redgate SQL Compare tool to do the deployments (and we could use SSDT and fit in SQL Compare if we wanted to) then I would extract the text using that tool; in fact, you should extract the files using whatever tool you will use to put the objects back in again, as each tool can do things in slightly different ways and we want to minimise the work that deployments do wherever possible. Once the code is imported we should build the code and we will find…
6
There are two common causes, and I haven't yet imported a database of any real age or size that worked straight away – well, there was one time, but I had imported an empty database by mistake.
7
The first cause is database references
When SSDT compiles the code it checks that all referenced objects exist and are correct, so if you have select id from table, it verifies that there is a table or view or something selectable called table and that it returns a column called id. If the reference is in another database then you need to create a project for that database, import it into SSDT and either reference the project or build the dacpac and reference that. This is an additional overhead, but it really is well worth it.
8
Let's just take a quick look at database references, as they always trip people up.
There are a few different types.
"This database" references are cool as they mean you can split your project into multiple projects, so different teams or people can work on different projects without affecting each other. What I mainly use them for is writing tSQLt tests – my tests go in a separate project with a "this database" reference, so each test can see all the objects in the main project. When I deploy the test project to a test database, the tSQLt framework, the tests and the actual code are deployed into one database; when the code goes up to the other environments only the production code does, and the tSQLt framework and tests DO NOT get deployed. Really, really, really useful.
Different database, same server is for cross-database calls – by the way, cross-database calls are basically the work of the devil and you should remove them, especially where you have two databases that make calls to each other…
Different database, different server is where you make cross-database calls over linked servers – just so that everyone is clear, you can do these things and they will work, but that doesn't mean it is a good idea.
System databases – if you have select * from sys.sysprocesses then that is also validated; SSDT ships with a dacpac for msdb and master, and that is basically what this reference uses.
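As a rough sketch of the "different database, same server" case (the [$(OrdersDb)] SQLCMD variable and the object names here are made up for illustration), the cross-database call in your project code ends up looking something like this:

-- OrdersDb is added as a database reference, which gives us a SQLCMD variable
-- (assumed here to be called $(OrdersDb)); SSDT validates dbo.Orders against
-- the referenced project or dacpac at build time.
CREATE VIEW dbo.RecentOrders
AS
SELECT o.OrderId,
       o.CustomerId,
       o.OrderDate
FROM   [$(OrdersDb)].dbo.Orders AS o
WHERE  o.OrderDate >= DATEADD(DAY, -30, GETDATE());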
9
The other cause of non-compiling code is broken code – who, me sir? Not me, sir!
Who has broken code in their databases today? Liars, liars! It really is too easy with SQL to have broken code: if you deploy a table, then reference that table from a stored procedure, then change or drop the table, the procedure still happily exists but can't be run. SQL ignores it until it is run, at which point it fails, but SSDT validates it and shows you where you have code that doesn't work. The thing to do here is to export the code and schema from the database into SSDT, check it all in, take a tag or something similar, and then start deleting your broken code. As long as you have a way to get back to the code it is fine to delete it – if you need it you can get back to it, but with it in a broken state there isn't much you can do (unless you fix it).
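A minimal sketch of how that happens (the table and procedure names are made up):

-- A table and a procedure that uses it.
CREATE TABLE dbo.Customer (CustomerId int NOT NULL, Name nvarchar(100) NOT NULL);
GO
CREATE PROCEDURE dbo.GetCustomer @CustomerId int
AS
SELECT CustomerId, Name FROM dbo.Customer WHERE CustomerId = @CustomerId;
GO
-- Someone later drops (or renames) the table...
DROP TABLE dbo.Customer;
GO
-- ...and the procedure still happily exists; SQL Server only resolves the
-- reference when it runs, so this now fails with "Invalid object name 'dbo.Customer'".
EXEC dbo.GetCustomer @CustomerId = 1;
-- SSDT, in contrast, reports the dangling reference as a build error before anything is deployed.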
10
So we now have our schema and code in SSDT, but we also have reference data. This is where we have a foreign key to a lookup table: those values are part of the code rather than part of the business data, so they should be in source control.
11
With SSDT we get a couple of scripts that we run either before or after a deployment: pre-deployment and post-deployment scripts. The process goes like this: SSDT compares your code to the target database; it generates a list of changes – alter this, drop that, create the other; then the pre-deploy script runs if it exists; the list of changes is run; then the post-deploy script is run. What we can do is put insert statements – or, my preference, merge statements – into the pre- or post-deploy script and they will be run every time the build is published. A couple of things to note: we should make them idempotent, which means that whether you run them once or one hundred times they do the same thing – so don't have inserts without checking whether the row exists. The easiest way from SQL 2008 upwards is to use merge statements. I realise there are a few warnings about using merge statements, but in this context – small reference tables – they are ideal. If anyone isn't immediately taken with the syntax of a merge statement (you basically have a join, an update, an insert and a delete all in one block), there are some things that can help: firstly, I always point people to a blog by Alex Whittles about the merge statement syntax, which breaks it down; then there is sp_generate_merge, which creates a merge statement from a table; or I have a free add-in called MergeUi that can import from a database, or you can add a new table using the definition inside the dacpac.
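As a rough sketch of what goes in the post-deploy script (the dbo.OrderStatus table and its values are made-up examples), an idempotent merge for a small reference table looks something like this:

-- Post-deployment script: keep the OrderStatus lookup table in line with source control.
-- Safe to run once or one hundred times – the end state is always the same.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, N'New'),
    (2, N'Shipped'),
    (3, N'Cancelled')
) AS source (OrderStatusId, StatusName)
    ON target.OrderStatusId = source.OrderStatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderStatusId, StatusName)
    VALUES (source.OrderStatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;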
14
So we have our schema, code and reference data in SSDT, in text files – let's check it all in.
I'm going to use git, and I have Git Bash installed. You can use git from the command line or use a UI like SourceTree – if you are new to git I would highly recommend SourceTree; even though I mainly use Git Bash, I use SourceTree for anything complicated. The newer versions of Visual Studio also have great git support, so I may even use that to check my code in. I have already created a git repository, so I just need to do git add ., git commit and git push.
15
It really makes no difference at all, not even the slightest bit, what source control tool you use – except that you shouldn't use Visual SourceSafe. It was last released about 10 years ago and, really – who here uses SourceSafe? Right, go back to the office tomorrow and think about how to take it out and replace it with something else. If you use git: I use the command line version, and SourceTree when I get stuck.
16
This is where it gets really fun. What we want to do is take the code that we have checked in, merge it with everyone else's changes, build it and deploy it. If we don't use SSDT but another tool, we can use that to take the .sql files, compare them to a database and deploy the changes, *or* if we use something like ReadyRoll or Liquibase then the upgrade scripts can be run. We are going to use TeamCity, but you can use any build server – TFS, Jenkins, whatever – as long as you have a way to automatically check out the code on a specific action, run a set of steps or command lines and, finally and most importantly, get some notification about the status of the builds. So in TeamCity I will create a new project and a new build, and at this stage what it will do is monitor git for changes; as a change occurs it will pull the code and build the SSDT project. If you weren't using SSDT, to verify the code you would have to deploy it; with SSDT it will make sure the code is valid before going anywhere near SQL.
17
When creating a build there are some things that will make your life easier
1 – Start small; don't try to deploy every change to all environments on day one, but do have an idea where you want to get to: what would be good to be doing in 1 month, 6 months, 1 year?
2 – Get it building in Visual Studio on the build server. If the project doesn't build because SSDT isn't installed, or for whatever other reason, it won't build from the command line either, and doing it this way at least gives you some feedback as to why.
3 – Get the process you will turn into build steps working as a batch or PowerShell script, so you can run it over and over and know how it will work; then you can take the different commands and put them into build steps.
18
I said creating the build was where it gets exciting but this really really is exciting
What we need to do is take the dacpac that SSDT produces, compare it to a database and deploy it. We do this using sqlpackage.exe, which comes with SSDT – we could use the DacFx API to deploy without using sqlpackage, but sqlpackage works well and is well documented, so why make life hard for ourselves? Now, when we deploy, what we should really do is know what state the production database is in and make sure the database we deploy to as part of our CI build is in the same state, *but* this is likely something that you can do further down the line – when you do get to this stage you can look at using backups or snapshots or even containers; if anyone wants to know more about that, grab me afterwards. So we are now at the stage where we have our code, we download it onto the build server, compile it into a dacpac, then take the dacpac and deploy it to a CI-specific database.
19
The first step is to deploy to a CI database, but what we want to do is drive sqlpackage to either deploy to other environments or build deployment scripts, so releases effectively go from "hey Ernie, we need to do a db release, ok gather the scripts, but not that script with that thing in that broke, all the other ones, but also remember to run them in this specific order and make sure you cross your legs and fingers when you do the release" to "DBA, please run the deployment script for build 1480". I like to drive sqlpackage.exe with a PowerShell script because there are loads of arguments to pass in, and also you can do stuff like creating a SQL connection to check whether the database is available, etc. THE THING TO NOTE IS THAT YOU MUST GET THE ENVIRONMENTS IN SYNC and have a way to deploy environment-specific stuff easily – ideally have that sort of stuff in config tables rather than different procs for different environments.
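As a rough sketch of that last point (the table, column and procedure names are all made up), environment-specific values can live in a config table that the same procedure reads in every environment, rather than having a different version of the procedure per environment:

-- One row per setting; only the data differs between environments, the code does not.
CREATE TABLE dbo.Config
(
    ConfigKey   nvarchar(128) NOT NULL PRIMARY KEY,
    ConfigValue nvarchar(512) NOT NULL
);
GO
-- Deployed identically everywhere; it just reads whatever value the local environment holds.
CREATE PROCEDURE dbo.GetArchiveFileShare
AS
SELECT ConfigValue
FROM   dbo.Config
WHERE  ConfigKey = N'ArchiveFileShare';
GO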