T-SQL on the JOB (2)


This is a walkthrough based on the book SQL On The JOB, please buy it!

First part here


Transactions ensure that a set of operations is performed as an atomic operation, aka either all of them are executed, or none of them runs against the DBMS. The COMMIT statement marks the atomic submission of everything declared since the BEGIN TRANSACTION clause.


ROLLBACK ensures that the DB state is reverted if any of the statements cannot be executed.
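As a minimal sketch of the idea (the Accounts table and its columns are made up for illustration), a transfer wrapped in a transaction might look like:

```sql
-- Either both updates are committed, or neither touches the table.
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT;    -- atomically submit everything since BEGIN TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK;  -- revert the DB to its state before BEGIN TRANSACTION
END CATCH;
```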


Relational Data



The DB design involves the usage of a normalized design approach:

each entity in our system is a flat, self-contained table of information;

to create the relationships between existing tables we use the JOIN clause to denormalize our data at query time.


primary key: a column (or set of columns) that uniquely identifies a row in the table

● has to be unique

○ use the UNIQUE constraint

○ has to be NOT NULL, through the NOT NULL constraint

■ optionally define a DEFAULT value


foreign key (FK): a column in a table that includes values of a column in another table


the FK column can only be populated with values contained in the referenced column of the other table

Enforcing relationships

to enforce the relationship's integrity use the CASCADE option of the FOREIGN KEY constraint on DELETE and UPDATE
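A sketch of a primary key, a foreign key, and cascading actions together (the Orders table and column names are assumptions for illustration):

```sql
CREATE TABLE Customers (
    ID   int          NOT NULL PRIMARY KEY,       -- unique and NOT NULL
    Name varchar(100) NOT NULL DEFAULT 'unknown'  -- optional DEFAULT value
);

CREATE TABLE Orders (
    ID         int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL
        REFERENCES Customers (ID)  -- FK: only values present in Customers.ID are allowed
        ON DELETE CASCADE          -- deleting a customer deletes their orders
        ON UPDATE CASCADE          -- changing Customers.ID propagates here
);
```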

check constraint

defines complex validation rules that column data must satisfy on insert and update
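A minimal sketch, assuming a hypothetical Accounts table with a Balance column:

```sql
-- Any insert or update that would make the balance negative is rejected.
ALTER TABLE Accounts
ADD CONSTRAINT CK_Accounts_Balance CHECK (Balance >= 0);
```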

unique index

when you want to avoid duplicates on insertion, instead of relying on a check constraint, use a unique index on the column
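For instance (the Email column is a hypothetical example):

```sql
-- Duplicate e-mail addresses are rejected at insert time.
CREATE UNIQUE INDEX IX_Customers_Email ON Customers ( Email );
```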



When you are looking for data in the DB you can speed up the search on a particular column by creating an index on that column.



Say the table contains 3 columns: name, number, road. You can define an index on name to speed up retrieval by name, because that is the 'natural' lookup column in this context.

clustered index

it determines the physical order of the rows on disk, so there can be only one clustered index per table


non clustered index: it contains the ordered values of a specific indexed column, with pointers to the physical rows in the table

N> an index speeds up the WHERE and ORDER BY clauses

N> an index slows down inserts, updates and deletes, as it is an external structure that has to be kept optimal


CREATE INDEX IX_Customers_Address ON Customers ( Address )

composite index

when it is necessary to combine more than one column to get meaningful results, you define an index on more than one column
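A composite index over the city and state columns used in this walkthrough might be defined as (index and column names are assumptions):

```sql
-- One index covering lookups that filter on City, or on City and State together.
CREATE INDEX IX_Customers_City_State ON Customers ( City, State );
```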


covering index

when every clause of a query can be fulfilled by the index alone



the composite index on (city, state) covers the query: the WHERE clause is on city and the result columns are all part of the index itself, aka the index alone is enough to produce the results



the composite index no longer covers the query once ID is also required; to retrieve the values of ID the engine still needs to read the physical table rows.
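A sketch of the two cases, assuming a composite index on (City, State) as above:

```sql
-- Covered: both the WHERE clause and the result columns live in the index,
-- so the index alone answers the query.
SELECT City, State FROM Customers WHERE City = 'Rome';

-- Not covered: ID is not part of the index, so the engine must also
-- look up the physical row for each match.
SELECT ID, City, State FROM Customers WHERE City = 'Rome';
```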

fragmented index

adding and removing values makes the index itself inefficient; the disk pages holding the index data become very fragmented, so the ideal index linearity is lost, aka reading sequential data means jumping around the disk

N> in fragmentation cases, use DROP INDEX IX_Customers_Address ON Customers and re-create it, or rebuild it in place with ALTER INDEX IX_Customers_Address ON Customers REBUILD

SQL Injections

what is it?

replacing text in queries built on the fly can be dangerous in case of malicious input. SQL injection aims to sneak into parameter values some specially crafted text that will cause trouble for the DB.

how does it work?

there is a huge list of cases; the general idea is to use -- and ' to change the meaning of the query and take control, especially by putting actions in place through a sub-query or by applying a change to a broader set of rows


UPDATE Customers SET Name = 'Rio' WHERE Name = 'Mario'

-- with ' OR Name <> ' in place of Mario

UPDATE Customers SET Name = 'Rio' WHERE Name = '' OR Name <> ''

-- it's always true; note the attacker needs to know the table structure


UPDATE Customers SET Name = 'Rio' WHERE Name = 'Mario'

-- with '; DELETE FROM Customers; -- in place of Mario

UPDATE Customers SET Name = 'Rio' WHERE Name = ''; DELETE FROM Customers; --'

-- the last dangling ' is masked by the -- comment and the injected statement is executed



using a PROC, the parameter values are treated as literals, so they are stored in the table without being executed; even malicious usage of ' and similar characters coming in from the web will not be executed.
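A minimal sketch of the idea (the procedure name and parameters are assumptions):

```sql
-- The parameter values are always treated as literals, never parsed as SQL,
-- so input like '; DELETE FROM Customers; -- is simply stored as text.
CREATE PROCEDURE RenameCustomer
    @oldName varchar(100),
    @newName varchar(100)
AS
BEGIN
    UPDATE Customers SET Name = @newName WHERE Name = @oldName;
END;
```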



DB tasks are usually batch based; this means each executed statement has a mechanism to prevent partial updates of a set of data. E.g. an UPDATE affecting 100 rows that fails @ position 99 will fire an error and the table will be reverted to its initial state; otherwise you would have 99 rows changed and just 1 untouched, which could be a cause of data errors

N> Rule: everything happens in the right way, or nothing happens


A cursor is a bunch of rows, usually kept in a special part of the DBMS memory, so that we can operate on them at row level




DECLARE mycursor CURSOR FOR SELECT name, [address] FROM Customers



You can have a snapshot of the grabbed rows in memory;


you can go back and forth on the cursor rows


-- open the cursor

OPEN mycursor

N> the variable list declared has to match the order and the types of the columns retrieved by the cursor

-- loop

DECLARE @name varchar(100), @address nvarchar(250)

FETCH NEXT FROM mycursor INTO @name, @address

-- do smth

-- close it

CLOSE mycursor

-- clean the used memory

DEALLOCATE mycursor



Don't use a cursor if you can use a DML statement;

use a cursor when some business logic has to be applied to each row, requiring more than a set-based operation

N> usually this business logic is implemented by the BL of the client application
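Putting the pieces above together, a complete row-by-row loop might look like this (the per-row PRINT is just a placeholder for the business logic):

```sql
DECLARE @name varchar(100), @address nvarchar(250);

DECLARE mycursor CURSOR FOR
    SELECT name, [address] FROM Customers;

OPEN mycursor;
FETCH NEXT FROM mycursor INTO @name, @address;

WHILE @@FETCH_STATUS = 0   -- 0 = the last fetch returned a row
BEGIN
    PRINT @name;           -- placeholder for the per-row business logic
    FETCH NEXT FROM mycursor INTO @name, @address;
END;

CLOSE mycursor;
DEALLOCATE mycursor;       -- clean the used memory
```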



Hope this paper has been useful!

