
Wednesday, March 28, 2012


MAX function cannot be used in an indexed view

All,
Does anyone know a workaround for the restriction on using MAX in an
indexed view? It seems it could really help us improve performance, but
we MUST use MAX to define the view...
Any suggestions?
10x
GC

There isn't. This is because indexed views are built in such a way that
each individual transaction can be processed without having to rebuild
the entire indexed view. This is not possible if there is an aggregate
such as AVG, MIN, or MAX.
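
To make the contrast concrete, a minimal sketch of what IS allowed (the
object names here are hypothetical, not from the thread): an aggregate
that can be maintained incrementally, such as SUM over a non-nullable
column together with the required COUNT_BIG(*), can be indexed.

-- Hypothetical base table.
CREATE TABLE dbo.SumDemo
  (a int NOT NULL PRIMARY KEY,
   b int NOT NULL);
go
-- SUM plus COUNT_BIG(*) can be maintained row by row as transactions
-- come in, so this grouped view can be indexed, unlike one using MAX.
CREATE VIEW dbo.SumDemoView WITH SCHEMABINDING
AS
SELECT b, SUM(a) AS SumOfA, COUNT_BIG(*) AS NumRows
FROM dbo.SumDemo
GROUP BY b;
go
CREATE UNIQUE CLUSTERED INDEX IX_SumDemoView ON dbo.SumDemoView(b);
go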
You could build your own 'materialized view' and maintain it using
triggers on the base table. However, deletes or updates of the maximum
value of a group might have a serious performance impact. Proper indexes
could mitigate this problem.
HTH,
Gert-Jan
GC wrote:
> All,
> Does anyone know a workaround for the restriction on using MAX in an
> indexed view? It seems it could really help us improve performance, but
> we MUST use MAX to define the view...
> Any suggestions?
> 10x
> GC

GC wrote:
> All,
> Does anyone know a workaround for the restriction on using MAX in an
> indexed view? It seems it could really help us improve performance, but
> we MUST use MAX to define the view...
> Any suggestions?
> 10x
> GC
No way, and usually no need to do it either. My understanding of why is
as follows:
First reason: usually MAX and MIN can be selected from a proper index
with a quick index seek - there usually is no need to further speed it
up. Second reason: it would not be that simple to re-calculate MAX
and/or MIN if you delete rows with that (MAX or MIN) value. So,
implementing that would be a lot of pain in the neck for little
practical advantage.
I think just using a proper index should be good enough under most
circumstances.
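
To illustrate the first reason, a minimal sketch (the table and index
names here are hypothetical, not from the thread): with an index keyed
on the grouping column followed by the aggregated column, MAX for one
group can be answered by a seek that touches a single row.

-- Hypothetical table and index.
CREATE TABLE dbo.Orders
  (OrderID int NOT NULL PRIMARY KEY,
   CustomerID int NOT NULL,
   Amount int NOT NULL);
go
-- Rows for one CustomerID are stored in Amount order, so the maximum
-- sits at the end of that range of the index.
CREATE INDEX IX_Orders_CustomerID_Amount
ON dbo.Orders (CustomerID, Amount);
go
-- This should be resolved with a single index seek (reading one row
-- from the end of the range) instead of scanning the whole table.
SELECT MAX(Amount) AS MaxAmount
FROM dbo.Orders
WHERE CustomerID = 42;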
--
Alex Kuznetsov
http://sqlserver-tips.blogspot.com/
http://sqlserver-puzzles.blogspot.com/

On 2 Jan 2007 05:01:25 -0800, GC wrote:
>All,
>Does anyone know a workaround for the restriction on using MAX in an
>indexed view? It seems it could really help us improve performance, but
>we MUST use MAX to define the view...
>Any suggestions?
>10x
>GC
Hi GC,
I'm afraid you'll have to roll your own. You can duplicate the "under the
cover" implementation of an indexed view by creating a table to store
the results, and populating it by using INSERT INTO ... SELECT and then
the query you currently have in your view. Next, to keep it current, add
triggers on the base tables.
Simple example (untested) - note that the second update statement in the
trigger, the one that finds the new max if the old max gets deleted or
updated, can make the trigger very slow!!
-- The base table.
CREATE TABLE dbo.TestTab
  (a int NOT NULL PRIMARY KEY,
   b int NOT NULL);
go
-- This is the indexed view we would like to have...
CREATE VIEW dbo.TestView WITH SCHEMABINDING
AS
SELECT b, MAX(a) AS MaxOfA
FROM dbo.TestTab
GROUP BY b;
go
-- ...but this fails, because MAX is not allowed in an indexed view.
CREATE UNIQUE CLUSTERED INDEX TestIndex ON dbo.TestView(b);
go
DROP VIEW dbo.TestView;
go
-- The workaround: a real table with the same shape as the view,
-- populated once from the base table...
CREATE TABLE dbo.TestView
  (b int NOT NULL PRIMARY KEY,
   MaxOfA int NOT NULL);
INSERT INTO dbo.TestView (b, MaxOfA)
SELECT b, MAX(a) AS MaxOfA
FROM dbo.TestTab
GROUP BY b;
go
-- ...and a trigger to keep it in sync with the base table.
CREATE TRIGGER TrgTestView1
ON TestTab FOR INSERT, UPDATE, DELETE
AS
-- If a new or changed row exceeds the stored maximum, raise it.
UPDATE TestView
SET MaxOfA = (SELECT MAX(a)
              FROM inserted AS i
              WHERE i.b = TestView.b)
WHERE EXISTS (SELECT *
              FROM inserted AS i
              WHERE i.b = TestView.b
              AND i.a > TestView.MaxOfA);
-- If a row holding the stored maximum was deleted or updated away,
-- recompute the maximum from the base table (the slow statement).
UPDATE TestView
SET MaxOfA = (SELECT MAX(a)
              FROM TestTab AS t
              WHERE t.b = TestView.b)
WHERE EXISTS (SELECT *
              FROM deleted AS d
              WHERE d.b = TestView.b
              AND d.a = TestView.MaxOfA);
go
DROP TABLE TestView;
DROP TABLE TestTab;
go
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis

On Wed, 03 Jan 2007 00:54:05 +0100, Hugo Kornelis wrote:
(snip)
>Simple example (untested) - note that the second update statement in the
>trigger, the one that finds the new max if the old max gets deleted or
>updated, can make the trigger very slow!!
I just realized I forgot to put in code to add a row to TestView if a
new value for b is added, and code to remove a row if the last occurrence
of a value is removed. Something like this:
INSERT INTO TestView (b, MaxOfA)
SELECT i.b, MAX(i.a)
FROM inserted AS i
WHERE NOT EXISTS
      (SELECT *
       FROM TestView AS t
       WHERE t.b = i.b)
GROUP BY i.b;
and
DELETE FROM TestView
WHERE EXISTS
      (SELECT *
       FROM deleted AS d
       WHERE d.b = TestView.b)
AND NOT EXISTS
      (SELECT *
       FROM TestTab AS t
       WHERE t.b = TestView.b);
The latter will, again, be slow. Also note that the DELETE has to run
before the UPDATE that recomputes the maximum; otherwise that UPDATE
would try to set MaxOfA to NULL for a group whose last row was just
removed.
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
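
Putting the pieces from both of Hugo's posts together (this would replace
the two-statement trigger above), the complete trigger might look like
the sketch below - untested, like the original, and the ordering of the
four statements is what makes it work:

CREATE TRIGGER TrgTestView1
ON TestTab FOR INSERT, UPDATE, DELETE
AS
-- 1. Add a row for any value of b that is new to TestView.
INSERT INTO TestView (b, MaxOfA)
SELECT i.b, MAX(i.a)
FROM inserted AS i
WHERE NOT EXISTS
      (SELECT *
       FROM TestView AS t
       WHERE t.b = i.b)
GROUP BY i.b;
-- 2. Remove rows whose group no longer exists in the base table.
--    This must run before step 4, which would otherwise compute a
--    NULL maximum for an emptied group.
DELETE FROM TestView
WHERE EXISTS
      (SELECT *
       FROM deleted AS d
       WHERE d.b = TestView.b)
AND NOT EXISTS
      (SELECT *
       FROM TestTab AS t
       WHERE t.b = TestView.b);
-- 3. Raise the stored maximum when a new or changed row exceeds it.
UPDATE TestView
SET MaxOfA = (SELECT MAX(i.a)
              FROM inserted AS i
              WHERE i.b = TestView.b)
WHERE EXISTS (SELECT *
              FROM inserted AS i
              WHERE i.b = TestView.b
              AND i.a > TestView.MaxOfA);
-- 4. Recompute from the base table when a row holding the stored
--    maximum went away (the slow statement).
UPDATE TestView
SET MaxOfA = (SELECT MAX(t.a)
              FROM TestTab AS t
              WHERE t.b = TestView.b)
WHERE EXISTS (SELECT *
              FROM deleted AS d
              WHERE d.b = TestView.b
              AND d.a = TestView.MaxOfA);
go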


Saturday, February 25, 2012

Master-detail subreports and performance.

If I create a master-detail report using subreports, will a separate query
run for each subreport at the time it loads? Let's say I have a report with
500 master rows; will it have to run 500 queries for the subreports before
the first page loads?
I know there is no on-demand subreport load, and from what I read you can't
do a heterogeneous join between 2 record sets in the same report - so that
the detail recordset would filter under each master row based on an id.
So what is the best way to produce a large master-detail report with
reasonable performance?
Thanks.

You are correct, the subreport is executed for each master record. Other
than making sure your database is optimized (correct indexes, for example)
there isn't much you can do to speed up the report. However, if
the report is only needed at a particular time (for instance, this report
is run daily) you could use snapshots. Read up on that in Books Online.
The other possibility is to consider drill through. In that case you show
the master data with a link; when the user wants additional data they
click the link and use jump-to-report to pull up the detail.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"David W" <spivey@.nospam.post.com> wrote in message
news:eSsZCOqlHHA.4628@.TK2MSFTNGP06.phx.gbl...
> If I create a master-detail report using subreports, will a separate query
> run for each subreport at the time it loads? Let's say I have a report
> with 500 master rows; will it have to run 500 queries for the subreports
> before the first page loads?
> I know there is no on-demand subreport load, and from what I read you
> can't do a heterogeneous join between 2 record sets in the same report -
> so that the detail recordset would filter under each master row based on
> an id.
> So what is the best way to produce a large master-detail report with
> reasonable performance?
> Thanks.

Thanks Bruce for confirming my suspicions. I'm definitely not doing a
report that generates over 500 queries to load a page, regardless of database
optimizations or snapshot usage.
Well, I ended up doing a SQL join between my master recordset query and my
detail recordset query to produce one mammoth recordset containing all
fields. Then I grouped on the master columns and displayed the detail
records as expandable sections. It's not efficient by any means, but it works
reasonably well, it's only a single query, and it gives me the
master-detail effect.
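
As an illustration of that approach, a sketch of the combined query (the
table and column names here are hypothetical, not from the posts): one
join yields a single flat recordset, and the report groups on the master
columns to get the master-detail layout.

-- One query instead of one subreport query per master row: the master
-- columns repeat on every detail row, and the report groups on them.
SELECT o.OrderID,
       o.OrderDate,
       o.CustomerName,    -- master columns
       d.ProductName,
       d.Quantity,
       d.UnitPrice        -- detail columns
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d
  ON d.OrderID = o.OrderID
ORDER BY o.OrderID, d.ProductName;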
Any idea when we are going to see the next version of Reporting Services?
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:%23AwSqovlHHA.4552@.TK2MSFTNGP04.phx.gbl...
> You are correct, the subreport is executed for each master record. Other
> than making sure your database is optimized (correct indexes, for example)
> there isn't much you can do to speed up the report. However, if
> the report is only needed at a particular time (for instance, this report
> is run daily) you could use snapshots. Read up on that in Books Online.
> The other possibility is to consider drill through. In that case you show
> the master data with a link; when the user wants additional data they
> click the link and use jump-to-report to pull up the detail.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "David W" <spivey@.nospam.post.com> wrote in message
> news:eSsZCOqlHHA.4628@.TK2MSFTNGP06.phx.gbl...
>> If I create a master-detail report using subreports, will a separate
>> query run for each subreport at the time it loads? Let's say I have a
>> report with 500 master rows; will it have to run 500 queries for the
>> subreports before the first page loads?
>> I know there is no on-demand subreport load, and from what I read you
>> can't do a heterogeneous join between 2 record sets in the same report -
>> so that the detail recordset would filter under each master row based on
>> an id.
>> So what is the best way to produce a large master-detail report with
>> reasonable performance?
>> Thanks.
>

What you are explaining here is drill down. It is a nice user interface but
not one I do too often because of the amount of data returned.
I would consider looking into using drill through in the future. Users are
very comfortable with it and you get really good performance.
The next (non-service-pack) release of RS comes with Katmai (the next
version of SQL Server). RS is part of SQL Server and releases with both
service packs and major releases of SQL Server. Early on this was not
true, but now it is totally in sync with the rest of SQL Server.
http://www.microsoft.com/presspass/press/2007/may07/05-09KatmaiPR.mspx
Right now they are just saying 2008.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"David W" <spivey@.nospam.post.com> wrote in message
news:u4QXU2vlHHA.3872@.TK2MSFTNGP04.phx.gbl...
> Thanks Bruce for confirming my suspicions. I'm definitely not doing a
> report that generates over 500 queries to load a page, regardless of
> database optimizations or snapshot usage.
> Well, I ended up doing a SQL join between my master recordset query and my
> detail recordset query to produce one mammoth recordset containing all
> fields. Then I grouped on the master columns and displayed the detail
> records as expandable sections. It's not efficient by any means, but it
> works reasonably well, it's only a single query, and it gives me the
> master-detail effect.
> Any idea when we are going to see the next version of Reporting Services?
>
>
> "Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
> news:%23AwSqovlHHA.4552@.TK2MSFTNGP04.phx.gbl...
>> You are correct, the subreport is executed for each master record. Other
>> than making sure your database is optimized (correct indexes, for example)
>> there isn't much you can do to speed up the report. However,
>> if the report is only needed at a particular time (for instance, this
>> report is run daily) you could use snapshots. Read up on that in Books
>> Online.
>> The other possibility is to consider drill through. In that case you show
>> the master data with a link; when the user wants additional data
>> they click the link and use jump-to-report to pull up the detail.
>>
>> --
>> Bruce Loehle-Conger
>> MVP SQL Server Reporting Services
>> "David W" <spivey@.nospam.post.com> wrote in message
>> news:eSsZCOqlHHA.4628@.TK2MSFTNGP06.phx.gbl...
>> If I create a master-detail report using subreports, will a separate
>> query run for each subreport at the time it loads? Let's say I have a
>> report with 500 master rows; will it have to run 500 queries for the
>> subreports before the first page loads?
>> I know there is no on-demand subreport load, and from what I read you
>> can't do a heterogeneous join between 2 record sets in the same report -
>> so that the detail recordset would filter under each master row based on
>> an id.
>> So what is the best way to produce a large master-detail report with
>> reasonable performance?
>> Thanks.
>>
>