

Monday, September 04, 2017

Development tutorial: insert_recordset using the Query class

Introduction

I am sure most of you are familiar with the set-based X++ CUD operators: insert_recordset, update_recordset and delete_from. They allow performing database operations on a large number of records in a single round trip to the server, instead of row by row, where the cost grows with the number of rows being processed. As a result, they can provide a very significant boost to the overall performance of a selected flow.

If you are not familiar with them, or just need a refresher, you can read more about them on MSDN.
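As a quick refresher, the set-based operators look roughly like the sketch below. The table, field and values are purely illustrative, and note that on tables with overridden update() or delete() methods these statements fall back to row-by-row processing unless the relevant skip* methods (such as skipDataMethods or skipDeleteMethod) are called.

SalesLine salesLine;

ttsbegin;

// Set-based update: clear the line discount on all lines of one order in a single statement.
update_recordset salesLine
    setting LineDisc = 0
    where salesLine.SalesId == '000001';

// Set-based delete: remove all lines of one order in a single statement (illustration only).
delete_from salesLine
    where salesLine.SalesId == '000001';

ttscommit;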

Problem statement

There is, however, one significant drawback with these operators: they are all compile-time constructs, so they lack the flexibility of letting the flow modify the database request before it is sent, based on the runtime state of the flow.

And once the application is sealed, there will be no way to modify the request even at compile time, as X++ statements are not extensible.

Solution

In Microsoft Dynamics AX 2012 R3, a static method was added to the Query class that solves the two problems above for insert_recordset. It is, of course, also available in Microsoft Dynamics 365 for Finance and Operations, Enterprise edition.

Note: update_recordset and delete_from are still not supported through the Query class.

Example

Imagine that we need to write a function that would copy selected sales order line information into a sales order line history table for one or more orders.

1. Data model

Here's how the data model for this table is defined in this example:

DEV_SalesLineHistory data model diagram

2. New DEV_SalesLineHistory table in Visual Studio

And here's how the new table looks in Visual Studio (I created a new model for it in a separate package dependent on Application Suite):

DEV_SalesLineHistory table in Visual Studio
Note that I skipped everything non-essential to this example.

3. Copy function through regular X++ insert_recordset statement

Let's first write the statement for inserting the history records using the regular insert_recordset operator:

public class DEV_Tutorial_InsertRecordset
{
    public static Counter insertXppInsert_Recordset(SalesId _salesId)
    {
        DEV_SalesLineHistory    salesLineHistory;
        SalesLine               salesLine;
        InventDim               inventDim;

        JournalPostedDateTime   postedDateTime = DateTimeUtil::utcNow();
        JournalPostedUserId     postedBy = curUserId();
        SalesDocumentStatus     postingType = DocumentStatus::PackingSlip;

        insert_recordset salesLineHistory
        (
            SalesId,
            LineNum,
            InventTransId,
            SalesQty,
            SalesUnit,
            InventSiteId,
            InventLocationId,
            PostedDateTime,
            PostedBy,
            PostingType
        )
        select SalesId, LineNum, InventTransId, SalesQty, SalesUnit from salesLine
            where salesLine.SalesId == _salesId
            join InventSiteId, InventLocationId, postedDateTime, postedBy, postingType from inventDim
                where inventDim.InventDimId == salesLine.InventDimId;

        return any2int(salesLineHistory.RowCount());
    }
}

As you can see, we do a simple select from SalesLine, specifying the exact fields, joined to selected fields from InventDim, where the field list also contains a few local variables to populate into the rows being inserted.
This is the standard syntax used with X++ insert_recordset statement, which all of you are familiar with.

4. Method signature for Query::insert_recordset()

Now let's convert the above to a Query, and call Query::insert_recordset() instead.
This method accepts three arguments:
  • An instance of a table record. This is where data will be inserted into. We can then use this variable to ask how many rows were inserted, for example.
  • An instance of a Map(Types::String, Types::Container), which defines the mapping of the fields to copy. In the X++ operator, this mapping was implied by the order of the fields in the selection lists of the select statement.
    • The map key is the target field name.
    • The value is a container, which defines a pair of values:
      • the unique identifier of the QueryBuildDataSource object that points to the table to copy the value from
      • the field name on the above data source to copy the value from
  • An instance of a Query class, which defines the select statement for the data, similar to what you see in the X++ version above.
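Schematically, a call therefore has the following shape (the variable names below are mine; the complete, working example follows in step 6):

DEV_SalesLineHistory    target;          // 1. buffer of the table to insert into; used afterwards e.g. for RowCount()
Map                     fieldMapping;    // 2. Map(Types::String, Types::Container): target field name -> [source data source uniqueId, source field name]
Query                   query;           // 3. query selecting the source rows

// ... build fieldMapping and query ...
Query::insert_recordset(target, fieldMapping, query);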

As you can see from the above, there is no way to supply literals, as we did with local variables in the X++ operator example.
That is currently not supported by this API.
We can, however, work around it through the use of a "temporary" table, as suggested below.

5. Define a new table to store calculable literals

Let us define a new table that will store the data required by our insert statement. That means it needs to contain four fields:
  • PostedDateTime
  • PostedBy
  • PostingType
  • SalesId - we'll use this to join to SalesLine. This could also be a sessionId or whatever else is required to ensure concurrency and uniqueness.
Here's how the table would look in Visual Studio designer:

DEV_SalesLineHistoryPostingDataTmp table definition

We can now populate this table with the required values and join it to our query.
After executing the bulk insert we can then delete the inserted row (if necessary).


Another possible implementation here could be to use a View with computed columns for the different literal values needed, selecting from a table that has only one row, like InventParameters or similar. This is however less flexible, as the values would be compiled in, while with a "temporary" table you can determine them at runtime.

6. Write up the code using the Query::insert_recordset() method

Now we are all set to write the necessary code. It would look like below:

public class DEV_Tutorial_InsertRecordset
{
    public static Counter insertQueryInsert_Recordset(SalesId _salesId)
    {
        DEV_SalesLineHistory    salesLineHistory;
        
        Query query = new Query();
        QueryBuildDataSource qbdsSalesLine = query.addDataSource(tableNum(SalesLine));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, SalesId));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, LineNum));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, InventTransId));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, SalesQty));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, SalesUnit));
        qbdsSalesLine.addRange(fieldNum(SalesLine, SalesId)).value(queryValue(_salesId));
        QueryBuildDataSource qbdsInventDim = qbdsSalesLine.addDataSource(tableNum(InventDim));
        qbdsInventDim.addSelectionField(fieldNum(InventDim, InventLocationId));
        qbdsInventDim.addSelectionField(fieldNum(InventDim, InventSiteId));
        qbdsInventDim.relations(true);
        QueryBuildDataSource qbdsPostingData = qbdsInventDim.addDataSource(tableNum(DEV_SalesLineHistoryPostingDataTmp));
        qbdsPostingData.addLink(fieldNum(SalesLine, SalesId), fieldNum(DEV_SalesLineHistoryPostingDataTmp, SalesId), qbdsSalesLine.name());
        qbdsPostingData.addSelectionField(fieldNum(DEV_SalesLineHistoryPostingDataTmp, PostedDateTime));
        qbdsPostingData.addSelectionField(fieldNum(DEV_SalesLineHistoryPostingDataTmp, PostedBy));
        qbdsPostingData.addSelectionField(fieldNum(DEV_SalesLineHistoryPostingDataTmp, PostingType));

        Map targetToSourceMap = new Map(Types::String, Types::Container);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, SalesId),           [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, SalesId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, LineNum),           [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, LineNum)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, InventTransId),     [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, InventTransId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, SalesQty),          [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, SalesQty)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, SalesUnit),         [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, SalesUnit)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, InventLocationId),  [qbdsInventDim.uniqueId(), fieldStr(InventDim, InventLocationId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, InventSiteId),      [qbdsInventDim.uniqueId(), fieldStr(InventDim, InventSiteId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, PostedDateTime),    [qbdsPostingData.uniqueId(), fieldStr(DEV_SalesLineHistoryPostingDataTmp, PostedDateTime)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, PostedBy),          [qbdsPostingData.uniqueId(), fieldStr(DEV_SalesLineHistoryPostingDataTmp, PostedBy)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, PostingType),       [qbdsPostingData.uniqueId(), fieldStr(DEV_SalesLineHistoryPostingDataTmp, PostingType)]);

        ttsbegin;
        
        DEV_SalesLineHistoryPostingDataTmp postingData;
        postingData.PostedDateTime = DateTimeUtil::utcNow();
        postingData.PostedBy = curUserId();
        postingData.PostingType = DocumentStatus::Invoice;
        postingData.SalesId = _salesId;
        postingData.insert();
        
        Query::insert_recordset(salesLineHistory, targetToSourceMap, query);

        delete_from postingData 
            where postingData.SalesId == _salesId;

        ttscommit;

        return any2int(salesLineHistory.RowCount());
    }
}

As you can see, the first part of the code builds a query using the QueryBuild* class hierarchy. The query is identical to the above select statement, with the addition of another join to our "tmp" table to retrieve the literals.
The second part populates the target to source Map object, which maps the fields to insert into to their source.
The third part actually invokes the operation, making sure we have the record populated in our "tmp" table beforehand.

Note. Because the first argument is the record we insert into, we can use it to get the RowCount(), telling us how many records have been inserted.

7. Extensibility aspect

Leaving the code just as it is does not actually make it extensible, as partners would not be able to add additional fields to copy, or additional conditions/joins to the query. To accomplish this, we would need to break the logic out into smaller methods with specific responsibilities, or add inline delegates to do the same. Generally speaking, you should always favor breaking the code down into simpler methods over delegates.
I've not done this in the example, as that's not the purpose, but you should follow these guidelines in production code.
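For illustration only, assume the query construction from step 6 were moved into its own public static method, say DEV_Tutorial_InsertRecordset::buildHistoryQuery(SalesId) (a name I made up for this sketch; it is not part of the downloadable project). A partner could then influence the query through Chain of Command, roughly like this:

// Hypothetical wrapper in a partner model; assumes buildHistoryQuery exists as described above.
[ExtensionOf(classStr(DEV_Tutorial_InsertRecordset))]
final class DEV_Tutorial_InsertRecordset_Extension
{
    public static Query buildHistoryQuery(SalesId _salesId)
    {
        Query query = next buildHistoryQuery(_salesId);

        // The partner can add extra ranges, joins or selection fields here
        // before the query is passed to Query::insert_recordset().

        return query;
    }
}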

8. Execute and test the code

We can execute these methods now, but first we need to ensure we have a test subject, aka a Sales order to practice with. Here's the one I used in the example:

A sales order with 3 order lines
And here's the Runnable Class that invokes the two methods above:

class DEV_Tutorial_InsertRecordset_Runner
{        
    public static void main(Args _args)
    {        
        const SalesId SalesId = '000810';

        if (SalesTable::exist(SalesId))
        {
            Counter xppInsert = DEV_Tutorial_InsertRecordset::insertXppInsert_Recordset(SalesId);
            Counter queryInsert = DEV_Tutorial_InsertRecordset::insertQueryInsert_Recordset(SalesId);

            setPrefix('Tutorial Insert_Recordset');
            info(strFmt('Inserted using X++ insert_recordset = %1', xppInsert));
            info(strFmt('Inserted using Query::insert_recordset() = %1', queryInsert));
        }
    }

}

You can download the full project here.


Hope this helps! Let me know if you have any comments.

Monday, October 24, 2011

Tutorial: AX2012 - A new way of accessing the QueryBuildDataSource for a particular FormDataSource

When playing with some X++ code, I found an interesting addition that happened in AX 2012 related to form development.

There are 2 new methods that were added to the FormDataSource class, which let you easily, and in a less error-prone way, access the underlying QueryBuildDataSource, whether that is for the initial form query or for the queryRun that also contains user filters and sorting.

So, for example, instead of writing:

this.query().dataSourceTable(tableNum(InventTableModule), 2);
this.queryRun().query().dataSourceName(identifierStr(Purch));
....

you can use the 2 new methods:

this.queryBuildDataSource();
this.queryRunQueryBuildDataSource();


It might seem like a very minor improvement, but in reality it greatly simplifies the maintenance of form code when new data sources are added or existing ones are removed or renamed.
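For example, adding a range in a data source's executeQuery override no longer requires looking the data source up by table number or occurrence. A small sketch, assuming a form with a SalesLine data source:

public void executeQuery()
{
    // this.queryBuildDataSource() returns this data source's QueryBuildDataSource on the form's
    // base query; use this.queryRunQueryBuildDataSource() instead if you need the instance that
    // also carries the user's filters and sorting.
    SysQuery::findOrCreateRange(this.queryBuildDataSource(), fieldNum(SalesLine, SalesStatus))
        .value(queryValue(SalesStatus::Backorder));

    super();
}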

That's all for today.

Friday, March 26, 2010

Tutorial: refresh, reread, research, executeQuery - which one to use?

X++ developers seem to be having a lot of trouble with these 4 datasource methods, no matter how senior they are in AX.
So I decided to make a small hands-on tutorial, demonstrating the common usage scenario for each of the methods. I have ordered the methods based on the impact on the rows being displayed in the grid.
You can download the xpo with the tutorial on my SkyDrive.

1. Common mistakes

Often, developers call 2 of the mentioned methods in the following order:
formDataSource.refresh()
formDataSource.research()

or
formDataSource.reread()
formDataSource.research()

or
formDataSource.research()
formDataSource.executeQuery()

or
formDataSource.research()
formDataSource.refresh() / formDataSource.reread()

All of these are wrong, or at least partially redundant.
Hopefully, after reading the full post, there will be no questions as to why they are wrong. Leave a comment to this post if one of them is still unclear, and I will try to explain in more detail.

2. Refresh

This method basically refreshes the data displayed in the form controls with whatever is stored in the form cache for that particular datasource record. Calling the refresh() method will NOT reread the record from the database. So if changes happened to the record in another process, these will not be shown after executing refresh().
refreshEx
Does a redraw of the grid rows, depending on the optional argument specifying the number of the record to refresh (this is the actual row number in the grid, which is less useful for AX developers). Special argument values include -1, which means that all records will be redrawn, and -2, which redraws all marked records and records with displayOptions. The default argument value is -2.
This method should be used sparingly, for cases where multiple rows in the grid are updated and, for example, their displayOptions change as a result. You should avoid using it as a replacement for refresh(), since the two actually have completely different implementations in the kernel.
Also note that refreshEx() only redraws the grid, so the controls not in the grid might still contain outdated values. refresh() updates everything, since that is its intention.
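For example (data source name assumed), after changing the display options of a number of marked records:

// Redraw all rows in the grid; nothing is re-read from the database or the form cache.
inventJournalTrans_ds.refreshEx(-1);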

3. Reread

Calling reread() will query the database and re-read the current record contents into the datasource form cache. This will not display the changes on the form until a redraw of the grid contents happens (for example, when you navigate away from the row or re-open the form).
You should not use it to refresh the form data if you have added or removed records through code. For that, use a different method, described below.
How are these 2 methods commonly used?
Usually, when you change some values in the current record through some code (for example, when the user clicks on a button), and update the database by calling update method on the table buffer, you would want to show the user the changes that happened.
In this case, you would call reread() method to update the datasource form cache with the values from the database (this will not update the screen), and then call refresh() to actually redraw the grid and show the changes to the user.
Clicking buttons with SaveRecord == Yes
Each button has a property SaveRecord, which is by default set to Yes. Whenever you click a button, the changes you have done in the current record are saved to the database. So calling reread will not restore the original record values, as some expect. If that is the user expectation, you as a developer should set the property to No.
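A minimal sketch of that reread() + refresh() pattern, for example in a button's clicked() method. The data source, buffer and field names below are illustrative, and a real implementation would go through the proper update logic:

void clicked()
{
    SalesLine salesLineUpd;

    super();

    // Change the current record in the database through code.
    ttsbegin;
    salesLineUpd = SalesLine::findInventTransId(salesLine.InventTransId, true);
    salesLineUpd.SalesQty += 1;
    salesLineUpd.update();
    ttscommit;

    // Re-read the changed record into the form data source cache (no redraw yet)...
    salesLine_ds.reread();
    // ...and then redraw the controls so the user sees the new values.
    salesLine_ds.refresh();
}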

4. Research

Calling research() will rerun the existing form query against the database, therefore updating the list with new/removed records as well as updating all existing rows. This will honor any existing filters and sorting on the form, that were set by the user.
Research(true)
The research method starting with AX 2009 accepts an optional boolean argument _retainPosition. If you call research(true), the cursor position in the grid will be preserved after the data has been refreshed. This is an extremely useful addition, which solves most of the problems with cursor positioning (findRecord method is the alternative, but this method is very slow).

5. ExecuteQuery

Calling executeQuery() will also rerun the query and update/add/delete the rows in the grid. The difference in behavior from research is described below.
ExecuteQuery should be used if you have modified the query in your code and need to refresh the form to display the data based on the updated query.
formDataSource.queryRun().query() vs formDataSource.query()
An important thing to mention here is that the form has 2 instances of the query object - one is the original datasource query (stored in formDataSource.query()), and the other is the currently used query with any user filters applied (stored in formDataSource.queryRun().query()).
When the research method is called, a new instance of the queryRun is created, using the formDataSource.queryRun().query() as the basis. Therefore, if the user has set up some filters on the displayed data, those will be preserved.
This is useful, for example, when multiple users work with a certain form, each user has his own filters set up for displaying only relevant data, and rows get inserted into the underlying table externally (for example, through AIF).
Calling executeQuery, on the other hand, will use the original query as the basis, therefore removing any user filters.
This is a distinction that everyone should understand when using research/executeQuery methods in order to prevent possible collisions with the user filters when updating the query.
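A small sketch of the difference (data source and range value assumed):

// Modify the form's own query and re-run it.
// Any filters the user has entered through the UI are discarded.
custTable_ds.query().dataSourceTable(tableNum(CustTable))
    .addRange(fieldNum(CustTable, CustGroup))
    .value(queryValue('10'));
custTable_ds.executeQuery();

// By contrast, simply re-fetch the data: user filters are kept,
// and with the optional argument set to true the cursor position is preserved as well.
custTable_ds.research(true);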

Tuesday, February 16, 2010

UtcDateTime in Dynamics AX 2009

In Dynamics AX 2009, Microsoft introduced a new data type, UtcDateTime, that is going to eventually replace the 2 existing types, Date and Time, which are still present in the application right now.
Obviously, the introduction of this new type requires a tutorial on how it can be used on forms, how you can filter on fields of this type, as well as what functions are available out of the box for it.
So I have made such a tutorial, and I hope it will be useful for developers upgrading to AX 2009.

Download the xpo for the tutorial from my SkyDrive

The tutorial consists of a single form, containing the following elements:
  • a grid, displaying data from CustTable
  • 4 buttons for various filtering actions
  • 3 controls that allow specifying the filtering conditions for the data
In the form, you can see how UtcDateTime-based controls are displayed both in a regular group and in a grid.

Dynamics AX UtcDateTime tutorial form

Below is an explanation of the implemented functionality, in form of a Question/Answer section:
  1. Q: Can I filter on the new UtcDateTime type, specifying the Date part only?
    A: Yes. You simply have to specify only the date part when applying the filter, like below. Note, that this also works fine when filtering directly from the UI (Ctrl+F).
    qbdsCustTable.addRange(fieldNum(CustTable, CreatedDateTime)).value(queryValue(DateFilter.dateValue()));
    What is interesting is how the kernel processes this range. In the below infolog, you can see that when viewing the query, it displays a "==" condition on a specific dateTime value.
    But in reality, as you can see from the SQL trace, a range ">= && <=" condition is applied to span exactly one day.
    Also note that the values in the trace are displayed accounting for the time zone I am in, as well as for Daylight Saving Time.

    SQL trace for Date filter on UtcDateTime field
  2. Q: Can I filter on the new UtcDateTime type, specifying the Time part only?
    A: No, this is not possible with a UtcDateTime type. The range applied when specifying a Time value is the minimum DateTime value, as seen below. Note, that in the SQL trace it is converted to "no range".

    SQL trace for Time filter on UtcDateTime field
  3. Q: Can I use similar query functions for UtcDateTime type?
    A: Yes. All the main existing functions for working with QueryBuildRange also support UtcDateTime. For example, in the infolog below you can see how a range on two UtcDateTime dates is applied; the Global::queryRange method was used to achieve that (see the small sketch after this list). Note, again, that the SQL trace offsets the DateTime by the appropriate number of hours based on my location.

    SQL Trace for UtcDateTime range
  4. Q: How is the UtcDateTime stored in the database? Is it displayed the same way on forms?
    A: The UtcDateTime fields are always stored in the database in Coordinated Universal Time (UTC). Whenever they are displayed on forms and bound to table fields, the data is converted to the user's preferred timezone. Note that you need to take care of the conversion yourself if the control is not bound to a field. For an example, see the init method of the tutorial form.
  5. Q: What standard helper functions are present for working with UtcDateTime type in the application?
    A: The main entry point for working with the UtcDateTime type is the DateTimeUtil class. It allows adding Days/Months/Years, as well as applying an offset, getting the user's preferred timezone, converting from/to other types, etc. An example from the form init method is posted below:
        // getSystemDateTime() returns the current DateTime set in the system, not the current machine dateTime.
        // Note that getSystemDateTime() returns a UTC date and time, not your local date time.
        // In order to receive your local DateTime value, you should use methods applyTimeZoneOffset and specify the preferred time zone.
        utcDateTimeFilter.dateTimeValue(
            DateTimeUtil::applyTimeZoneOffset(
                DateTimeUtil::getSystemDateTime(),
                DateTimeUtil::getUserPreferredTimeZone()));
  6. Q: Does this mean that the support for Date and Time types has been removed?
    A: No, Date and Time are still supported. As you can see in the form init method, systemDateGet(), timeNow() and today() all still work.
  7. Q: I don't see the actual filter values in the SQL log. Instead, all I see are "?"'s. Also, how can I limit the number of data/fields selected from the database?
    A: This is just some extra stuff, not related to UtcDateTime, but still useful to know and pay attention to.
    CustTable has a very large number of fields, and I am only displaying 4 of those in the form, so it would be unwise to always query and return all of the fields. Luckily, the datasource has a property OnlyFetchActive, which controls the query behavior by only selecting the fields actually displayed on the form. Note that you should avoid using this with editable datasources; see the comments to this post for details.
    As for "?"'s in the SQL trace - that is happening due to the use of placeholders. This in general optimizes the performance of the queries, by creating a query execution plan and storing it for future use. But it is possible, and is required in some specific cases, to force the use of literals (meaning the actual values of the ranges in the query). This can be done using the literals method on the query. See method init on the form for an example.
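Here is the small sketch referenced in item 3 above: applying a from..to range over a UtcDateTime field with the queryRange helper. The table, field and dates are just examples:

Query                   query;
QueryBuildDataSource    qbdsCustTable;

query = new Query();
qbdsCustTable = query.addDataSource(tableNum(CustTable));

// Build a "from..to" range expression over the CreatedDateTime field.
// DateTimeUtil::newDateTime combines a date and a time of day given in seconds (86399 = 23:59:59).
qbdsCustTable.addRange(fieldNum(CustTable, CreatedDateTime)).value(
    queryRange(
        DateTimeUtil::newDateTime(1\1\2010, 0),
        DateTimeUtil::newDateTime(31\1\2010, 86399)));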

Sunday, February 07, 2010

Performance optimization: Deleting inventory journal lines

Preamble

As a developer, you should always consider performance implications of the code you write. In an ERP application like Microsoft Dynamics AX, the main focus should be on query execution, since it takes up the overwhelming part of the servers' resources.

You should always write queries that would execute the minimum amount of time and use the minimum amount of resources, at the same time producing the expected output in all cases.

Note, that performance is one of those things you cannot really verify on a 1-box install with a small test dataset. Most query problems show up only when tested with many users concurrently loading the AOS on a large-size database.

Code example

Table method
AOT\Data Dictionary\Tables\InventJournalTrans\Methods\delete
contains the following code:

if (this.Voucher)
{
    if (this.numOfVoucherLines() == 0)
    {
        JournalError::deleteVoucher(tablenum(InventJournalTable), this.JournalId, this.Voucher);
    }
}

The code is logically correct, deleting related records from the JournalError table, which contains error messages generated during validation and posting of journals.
But now let us consider the actual implementation. If we rephrase the conditions under which we delete the error message history, it would sound something like:
If a Voucher number is specified on the line being deleted, and there are no more lines in this journal that use this Voucher number, then we remove the JournalError records.

Deeper code analysis for performance

So, let's go in deeper, and open the code for method numOfVoucherLines:

Integer numOfVoucherLines()
{
    return any2int((select count(RecId) from inventJournalTrans
                        where inventJournalTrans.JournalId == this.JournalId &&
                              inventJournalTrans.Voucher   == this.Voucher).RecId);
}

As you can see, it counts all the journal lines with the specified Voucher number and JournalId.
Does it generate the expected output? Yes.
Is it optimal for the scenario in which it is used? No.

In order to determine if JournalError records can be deleted, we only need to know if at least 1 record with the specified Voucher number exists. So why do we need to search and count all such records? We don't.
Note that this code is being executed for each journal line, magnifying the impact of a non-optimal query N-fold.

Suggested solution

So, if we create a new method like this:

boolean voucherLineExist()
{
    return (select firstfast firstonly forceplaceholders recId from inventJournalTrans
                index hint VoucherIdx
                where inventJournalTrans.journalId == this.journalId &&
                      inventJournalTrans.voucher   == this.voucher).recId != 0;
}

and use it instead of numOfVoucherLines() in delete() method, we will greatly improve the overall performance when deleting journal lines.
Measurements showed that the time it takes to delete a journal with a large number of lines (88,000, to be specific) was reduced by a factor of 6, which is fantastic.
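The check in delete() would then read as follows, mirroring the original condition but with an existence test instead of a full count:

if (this.Voucher)
{
    if (!this.voucherLineExist())
    {
        JournalError::deleteVoucher(tablenum(InventJournalTable), this.JournalId, this.Voucher);
    }
}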

Notes

  • forceplaceholders is used in the query explicitly to make sure a query plan is created and reused for this query, which further improves the performance, since the Voucher number most probably changes from line to line (depends on related journal name setup).
  • It is generally still a best practice to avoid creating inventory journals with such a large number of rows, because the journal is usually processed in 1 database transaction.
