
Monday, September 04, 2017

Development tutorial: insert_recordset using the Query class

Introduction

I am sure most of you are familiar with the set-based X++ CUD operators: insert_recordset, update_recordset and delete_from. They perform a database operation on a large number of records in a single roundtrip to the server, instead of row by row, where the cost grows with the number of rows processed. As a result, they can provide a very significant boost to the overall performance of a given flow.

If you are not familiar with them, or just need a refresher, you can read more by following the link to MSDN.
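
For illustration, here is a minimal sketch of the update and delete operators (insert_recordset is covered in depth below), assuming a hypothetical staging table DEV_MyStagingTable with fields Amount and IsProcessed:

// Minimal sketch, assuming a hypothetical table DEV_MyStagingTable
// with fields Amount (real) and IsProcessed (NoYes enum).
DEV_MyStagingTable staging;

// update_recordset: one roundtrip, regardless of the number of matching rows
update_recordset staging
    setting IsProcessed = NoYes::Yes
    where staging.Amount > 100;

// delete_from: deletes all matching rows in a single statement
delete_from staging
    where staging.IsProcessed == NoYes::Yes;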

Problem statement

There is, however, one significant drawback with these operators - they are all compile-time constructs, so they lack the flexibility to modify the database request based on the runtime state of the flow before it is sent.

And once the application is sealed, there will be no way to modify the request even at compile time, as X++ statements are not extensible.

Solution

In Microsoft Dynamics AX 2012 R3 a static method was added to the Query class that solves both of the problems above for insert_recordset. It is of course also available in Microsoft Dynamics 365 for Finance and Operations: Enterprise edition.

Note: update_recordset and delete_from are still not supported through the Query class.

Example

Imagine that we need to write a function that would copy selected sales order line information into a sales order line history table for one or more orders.

1. Data model

Here's how the data model for this table is defined in this example:

DEV_SalesLineHistory data model diagram

2. New DEV_SalesLineHistory table in Visual Studio

And here's how the new table looks in Visual Studio (I created a new model for it in a separate package dependent on Application Suite):

DEV_SalesLineHistory table in Visual Studio
Note: I skipped everything non-essential to this example

3. Copy function through regular X++ insert_recordset statement

Let's first write the statement for inserting the history records using the regular insert_recordset operator:

public class DEV_Tutorial_InsertRecordset
{
    public static Counter insertXppInsert_Recordset(SalesId _salesId)
    {
        DEV_SalesLineHistory    salesLineHistory;
        SalesLine               salesLine;
        InventDim               inventDim;

        JournalPostedDateTime   postedDateTime = DateTimeUtil::utcNow();
        JournalPostedUserId     postedBy = curUserId();
        SalesDocumentStatus     postingType = DocumentStatus::PackingSlip;

        insert_recordset salesLineHistory
        (
            SalesId,
            LineNum,
            InventTransId,
            SalesQty,
            SalesUnit,
            InventSiteId,
            InventLocationId,
            PostedDateTime,
            PostedBy,
            PostingType
        )
        select SalesId, LineNum, InventTransId, SalesQty, SalesUnit from salesLine
            where salesLine.SalesId == _salesId
            join InventSiteId, InventLocationId, postedDateTime, postedBy, postingType from inventDim
                where inventDim.InventDimId == salesLine.InventDimId;

        return any2int(salesLineHistory.RowCount());
    }
}

As you can see, we do a simple select from SalesLine, specifying the exact fields, joined to selected fields from InventDim, where the field list also contains a few local variables whose values are populated into the rows being inserted.
This is the standard syntax of the X++ insert_recordset statement, which all of you are familiar with.

4. Method signature for Query::insert_recordset()

Now let's convert the above to a Query, and call Query::insert_recordset() instead.
This method accepts three arguments:
  • An instance of a table record. This is where data will be inserted into. We can then use this variable to ask how many rows were inserted, for example.
  • An instance of a Map(Types::String, Types::Container), which defines the mapping of the fields to copy. With the X++ operator, this mapping was implied by the order of the fields in the select lists.
    • The map key is the target field name.
    • The value is a container, which defines a pair of values:
      • the unique identifier of the QueryBuildDataSource object that points to the table to copy the value from
      • the field name on that data source to copy the value from
  • An instance of a Query class, which defines the select statement for the data, similar to what you see in the X++ version above.
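
In skeletal form, a call therefore looks like this (the table and field names here are placeholders, not part of the actual example):

MyTargetTable target;    // hypothetical target table buffer
Query query = new Query();
// ... add data sources, selection fields and ranges to the query ...

Map fieldMapping = new Map(Types::String, Types::Container);
// target field name -> [source data source unique id, source field name]
// fieldMapping.insert(fieldStr(MyTargetTable, MyField), [qbds.uniqueId(), fieldStr(MySourceTable, MyField)]);

Query::insert_recordset(target, fieldMapping, query);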

As you can see from the above, there is no provision for literals, like the variables we used in the X++ operator example.
That is currently not supported with this API.
We can, however, work around this through the use of a "temporary" table, as suggested below.

5. Define a new table to store calculable literals

Let us define a new table that will store the data required by our insert statement. That means it needs to contain four fields:
  • PostedDateTime
  • PostedBy
  • PostingType
  • SalesId - we'll use this to join to SalesLine. It could instead be a session ID or whatever else is required to ensure concurrency and uniqueness
Here's how the table would look in Visual Studio designer:

DEV_SalesLineHistoryPostingDataTmp table definition

We can now populate this table with the required values and join it to our query.
After executing the bulk insert we can then delete the inserted row (if necessary).


Another possible implementation is a View with computed columns for the different literal values needed. You could select from a table that always has exactly one row, like InventParameters or the like. This is however less flexible, as the values are compiled in, while with a "temporary" table you can determine them at runtime.
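
As a sketch of that alternative (the view name and method below are illustrative, not part of this example's project), a computed string column on such a view would be backed by a method along these lines:

// Illustrative sketch: a method backing a computed (string) column on a
// hypothetical view DEV_PostingDataView. The value becomes part of the view
// definition in SQL - hence "compiled in".
private static server str postingTypeValue()
{
    // Return the T-SQL literal for the enum value to expose on the view
    return SysComputedColumn::returnLiteral(int2str(enum2int(DocumentStatus::Invoice)));
}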

6. Write up the code using the Query::insert_recordset() method

Now we are all set to write the necessary code. It would look like below:

public class DEV_Tutorial_InsertRecordset
{
    public static Counter insertQueryInsert_Recordset(SalesId _salesId)
    {
        DEV_SalesLineHistory    salesLineHistory;
        
        Query query = new Query();
        QueryBuildDataSource qbdsSalesLine = query.addDataSource(tableNum(SalesLine));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, SalesId));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, LineNum));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, InventTransId));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, SalesQty));
        qbdsSalesLine.addSelectionField(fieldNum(SalesLine, SalesUnit));
        qbdsSalesLine.addRange(fieldNum(SalesLine, SalesId)).value(queryValue(_salesId));
        QueryBuildDataSource qbdsInventDim = qbdsSalesLine.addDataSource(tableNum(InventDim));
        qbdsInventDim.addSelectionField(fieldNum(InventDim, InventLocationId));
        qbdsInventDim.addSelectionField(fieldNum(InventDim, InventSiteId));
        qbdsInventDim.relations(true);
        QueryBuildDataSource qbdsPostingData = qbdsInventDim.addDataSource(tableNum(DEV_SalesLineHistoryPostingDataTmp));
        qbdsPostingData.addLink(fieldNum(SalesLine, SalesId), fieldNum(DEV_SalesLineHistoryPostingDataTmp, SalesId), qbdsSalesLine.name());
        qbdsPostingData.addSelectionField(fieldNum(DEV_SalesLineHistoryPostingDataTmp, PostedDateTime));
        qbdsPostingData.addSelectionField(fieldNum(DEV_SalesLineHistoryPostingDataTmp, PostedBy));
        qbdsPostingData.addSelectionField(fieldNum(DEV_SalesLineHistoryPostingDataTmp, PostingType));

        Map targetToSourceMap = new Map(Types::String, Types::Container);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, SalesId),           [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, SalesId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, LineNum),           [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, LineNum)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, InventTransId),     [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, InventTransId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, SalesQty),          [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, SalesQty)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, SalesUnit),         [qbdsSalesLine.uniqueId(), fieldStr(SalesLine, SalesUnit)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, InventLocationId),  [qbdsInventDim.uniqueId(), fieldStr(InventDim, InventLocationId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, InventSiteId),      [qbdsInventDim.uniqueId(), fieldStr(InventDim, InventSiteId)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, PostedDateTime),    [qbdsPostingData.uniqueId(), fieldStr(DEV_SalesLineHistoryPostingDataTmp, PostedDateTime)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, PostedBy),          [qbdsPostingData.uniqueId(), fieldStr(DEV_SalesLineHistoryPostingDataTmp, PostedBy)]);
        targetToSourceMap.insert(fieldStr(DEV_SalesLineHistory, PostingType),       [qbdsPostingData.uniqueId(), fieldStr(DEV_SalesLineHistoryPostingDataTmp, PostingType)]);

        ttsbegin;
        
        DEV_SalesLineHistoryPostingDataTmp postingData;
        postingData.PostedDateTime = DateTimeUtil::utcNow();
        postingData.PostedBy = curUserId();
        postingData.PostingType = DocumentStatus::Invoice;
        postingData.SalesId = _salesId;
        postingData.insert();
        
        Query::insert_recordset(salesLineHistory, targetToSourceMap, query);

        delete_from postingData 
            where postingData.SalesId == _salesId;

        ttscommit;

        return any2int(salesLineHistory.RowCount());
    }
}

As you can see, the first part of the code builds a query using the QueryBuild* class hierarchy. The query is identical to the above select statement, with the addition of another join to our "tmp" table to retrieve the literals.
The second part populates the target to source Map object, which maps the fields to insert into to their source.
The third part actually invokes the operation, making sure we have the record populated in our "tmp" table beforehand.

Note: Because the first argument is the record buffer we insert into, we can use it to call RowCount(), telling us how many records have been inserted.

7. Extensibility aspect

Leaving the code just as it is does not actually make it extensible, as partners would not be able to add additional fields to copy, or additional conditions/joins to the query. To accomplish this, we'd need to break the logic out into smaller methods with specific responsibilities, or add inline delegates to do the same. Generally speaking, you should always favor breaking the code down into simpler methods over delegates.
I've not done this in the example, as that's not its purpose, but you should follow these guidelines in production code.

8. Execute and test the code

We can execute these methods now, but first we need a test subject, i.e. a sales order to practice on. Here's the one I used in the example:

A sales order with 3 order lines
And here's the Runnable Class that invokes the two methods above:

class DEV_Tutorial_InsertRecordset_Runner
{        
    public static void main(Args _args)
    {        
        const SalesId SalesId = '000810';

        if (SalesTable::exist(SalesId))
        {
            Counter xppInsert = DEV_Tutorial_InsertRecordset::insertXppInsert_Recordset(SalesId);
            Counter queryInsert = DEV_Tutorial_InsertRecordset::insertQueryInsert_Recordset(SalesId);

            setPrefix('Tutorial Insert_Recordset');
            info(strFmt('Inserted using X++ insert_recordset = %1', xppInsert));
            info(strFmt('Inserted using Query::insert_recordset() = %1', queryInsert));
        }
    }

}

You can download the full project here.


Hope this helps! Let me know if you have any comments.

Tuesday, July 18, 2017

Announcement: Extensibility documentation is now available

With the next release of Dynamics 365 for Finance and Operations Enterprise edition we plan to soft seal the Application Suite model.
For anyone not yet familiar with this agenda, please read through the previous announcement.

The partner channel needs to be educated to successfully migrate their development from overlayering to extensions.

To help with that, we have created a dedicated section on Extensibility in our documentation. It covers our plans and the process of migrating to extensions, as well as specific how-to articles on platform capabilities and on specific application frameworks and how to extend them (coming soon...)

Home page for Extensibility documentation

If you cannot find a particular topic, let us know, either directly on the docs page, or here through comments to this post.

Hope this helps!

Thanks

Saturday, April 15, 2017

Development tutorial link: Extending the Warehouse management functionality after application seal

Background


At the Dynamics 365 for Operations Technical Conference earlier this year, Microsoft announced its plans around overlayering going forward. If you have not heard it yet, here's the tweet I posted on it:


AppSuite will be "soft sealed" in Fall 2017 release and "hard sealed" in Spring 2018 - move to use extensions in your solutions

However, the application SYS code still has a number of areas that are difficult or impossible to customize without overlayering. One such area is the Warehouse management area.

Improvements

In the SCM team we have been focusing on improving the Extensibility story in the application, which includes the above-mentioned warehousing area.

Below are a few posts Michael published recently describing some of the changes we have made in the Warehouse management area, which are now available as part of the Microsoft Dynamics 365 for Operations Spring 2017 preview.
These should allow customizing the warehousing functionality to a large degree without the need to overlayer the code.

Extending WHS – Adding a new flow
Extending WHS – Adding a new control type
Extending WHS – Changing behavior of existing control types
Extending WHS – Adding a new custom work type
Extending WHS – Adding a new location directive strategy

Please look through these changes and let us know if YOUR functional enhancements are still impossible, so we can get them addressed.

Thanks

Friday, March 31, 2017

Development tutorial: SysExtension framework with SysExtensionIAttribute and an Instantiation strategy

Problem statement

This post continues where we left off in my previous blog post about the SysExtension framework: Development tutorial: SysExtension framework in factory methods where the constructor requires one or more arguments

In the above blog post I described how to use the SysExtension framework in combination with an Instantiation strategy, which applies in many cases where the class being instantiated requires input arguments in the constructor.

At the end of the post, however, I mentioned there is one flaw with that implementation. That problem is performance.

If you remember a blog post by mfp from a while back (https://blogs.msdn.microsoft.com/mfp/2014/05/08/x-pedal-to-the-metal/), in it he describes the problems with the SysExtension framework in AX 2012 R2, where there were two main issues:
  • A heavy use of reflection to build the cacheKey used to look up the types
  • Interop impact when needing to make Native AOS calls instead of pure IL
The second problem is not really relevant in Dynamics 365 for Operations, as everything runs in IL now by default.

The first problem was resolved through the introduction of an interface, SysExtensionIAttribute, which ensures the cache key is built by the attribute itself and does not require reflection calls; this immediately improved performance by more than 10x.

Well, if you were paying attention to the example in my previous blog post, you noticed that my attribute did not implement the above-mentioned interface. That is because using an instantiation strategy in combination with the SysExtensionIAttribute attribute was not supported.

It becomes obvious if you look at the comments in the below code snippet of the SysExtension framework:
public class SysExtensionAppClassFactory extends SysExtensionElementFactory
{
    ...
    public static Object getClassFromSysAttribute(
        ClassName       _baseClassName,
        SysAttribute    _sysAttribute,
        SysExtAppClassDefaultInstantiation  _defaultInstantiation = null
        )
    {
        SysExtensionISearchStrategy         searchStrategy;
        SysExtensionCacheValue              cachedResult;
        SysExtModelAttributeInstance        attributeInstance;
        Object                              classInstance;
        SysExtensionIAttribute              sysExtensionIAttribute = _sysAttribute as SysExtensionIAttribute;

        // The attribute implements SysExtensionIAttribute, and no instantiation strategy is specified
        // Use the much faster implementation in getClassFromSysExtAttribute().
        if (sysExtensionIAttribute && !_defaultInstantiation)
        {
            return SysExtensionAppClassFactory::getClassFromSysExtAttribute(_baseClassName, sysExtensionIAttribute);
        }
        ...
    }
    ...
}

So if we were to use an Instantiation strategy, we would fall back to the "old" way that goes through reflection. Moreover, it would actually not work even then, as the two ways of building the cache key would conflict.

That left you with one of two options:

  • Not implement SysExtensionIAttribute on the attribute and reap the benefits of using an instantiation strategy, but suffer the significant performance hit that comes with it, or
  • Use SysExtensionIAttribute, but as a result not be able to use an instantiation strategy, which limited the places where it was applicable

No more! 

We have updated the SysExtension framework in Platform Update 5, so now you can reap the benefits of both worlds, using an instantiation strategy and implementing the SysExtensionIAttribute interface on the attribute.

Let us walk through the changes required to our project for that:

1.

First off, let's implement the interface on the attribute definition. We can now also get rid of the parm* method, which was only necessary with the "old" reflection-based approach, as that was how the framework retrieved the attribute value to build up the cache key.

class NoYesUnchangedFactoryAttribute extends SysAttribute implements SysExtensionIAttribute
{
    NoYesUnchanged noYesUnchangedValue;

    public void new(NoYesUnchanged _noYesUnchangedValue)
    {
        noYesUnchangedValue = _noYesUnchangedValue;
    }

    public str parmCacheKey()
    {
        return classStr(NoYesUnchangedFactoryAttribute)+';'+int2str(enum2int(noYesUnchangedValue));
    }

    public boolean useSingleton()
    {
        return true;
    }

}

As part of implementing the interface, we needed to provide an implementation of the parmCacheKey() method, which returns a cache key that takes the attribute value into account. We also needed to implement the useSingleton() method, which determines whether the extension framework should return the same instance for a given extension.

The framework will now rely on the parmCacheKey() method instead of needing to browse through the parm* methods on the attribute class.

2.

Let's now also change the Instantiation strategy class we created to implement the SysExtensionIInstantiationStrategy interface instead of extending SysExtAppClassDefaultInstantiation. The latter is no longer necessary, and this is cleaner.

public class InstantiationStrategyForClassWithArg implements SysExtensionIInstantiationStrategy
{
...
}

The implementation should stay the same.

3. 

Finally, let's change the construct() method on the base class to use the new API, by calling the getClassFromSysAttributeWithInstantiationStrategy() method instead of getClassFromSysAttribute() (which is still there for backward compatibility):

public class BaseClassWithArgInConstructor
{
...
    public static BaseClassWithArgInConstructor construct(NoYesUnchanged _factoryType, str _argument)
    {
        NoYesUnchangedFactoryAttribute attr = new NoYesUnchangedFactoryAttribute(_factoryType);
        BaseClassWithArgInConstructor inst = SysExtensionAppClassFactory::getClassFromSysAttributeWithInstantiationStrategy(
            classStr(BaseClassWithArgInConstructor), attr, InstantiationStrategyForClassWithArg::construct(_argument));
        
        return inst;
    }
}

Result

Running the test now will produce the following result in infolog:

The derived class was returned with the argument populated in

Download

You can download the full project for the updated example from my OneDrive.


Hope this helps!

Development tutorial link: Extensibility challenges: Pack/Unpack in RunBase classes

Introduction + example

As you know, we have been focusing on extending our Extensibility story in the application, as well as trying to document the various patterns common to the application and how to address them if you are an ISV and need to extend some existing functionality.

mfp has recently written a blog post describing how you can extend the information shown on a RunBase-based dialog, and how to handle that information once the user enters the necessary data.

You can read through that particular example here: 
What that example did not describe is how to preserve the user-entered data, so that the next time the dialog is opened it contains the last entered values. This is the typical pattern used across all AX forms and is internally based on the SysLastValue table.

In RunBase classes it is done through the pack and unpack methods (as well as initParmDefault).
To ensure seamless code upgrade of these classes, they also rely on a "version" of the stored SysLastValue data, which is typically kept in a macro definition. The internal class state that needs to be preserved between runs is typically listed in a local macro.
A typical example is shown below (taken from the Tutorial_RunBaseBatch class):

    #define.CurrentVersion(1)
    #localmacro.CurrentList
        transDate,
        custAccount
    #endmacro

    public container pack()
    {
        return [#CurrentVersion, #CurrentList];
    }

    public boolean unpack(container packedClass)
    {
        Version version = RunBase::getVersion(packedClass);
    
        switch (version)
        {
            case #CurrentVersion:
                [version,#CurrentList] = packedClass;
                break;
            default:
                return false;
        }

        return true;
    }

In short, what happens is:

  • We save the packed state of the class with the corresponding version into the SysLastValue table record for this class, which means that all variables in the CurrentList macro need to be "serializable". 
    • The container will look something like this: [1, 31/3/2017, "US-0001"]
  • When we need to retrieve/unpack these values, we retrieve the version as we know it's the first position in the container.
    • If the version is still the same as the current version, read the packed container into the variables specified in the local macro
    • If the version is different from the current version, return false, which will subsequently run initParmDefault() method to load the default values for the class state variables 

Problem statement

This works fine in overlayering scenarios, because you just add any additional state to the CurrentList macro and it will be packed/unpacked automatically when necessary.

But what do you do when overlayering is not an option? You use augmentation / extensions.

However, it is not possible to extend macros, whether global or locally defined. Macros are replaced with the corresponding text at compile time, which means all existing code using a macro would need to be recompiled if you extended it - and that is not an option.

OK, you might say, I can just add a post-method handler for the pack/unpack methods and append my additional state to the end of the container.

Well, that might work if your solution is the only one, but let's look at what could happen when there are 2 solutions deployed side by side:
  1. Pack is run and returns a container looking like this (Using the example from above): [1, 31/3/2017, "US-0001"]
  2. Post-method handler is called on ISV extension 1, and returns the above container + the specific state for ISV 1 solution (let's assume it's just an extra string variable): [1, 31/3/2017, "US-0001", "ISV1"]
  3. Post-method handler is called on ISV extension 2, and returns the above container + the specific state for ISV 2 solution: [1, 31/3/2017, "US-0001", "ISV1", "ISV2"]
Now, when the class is run the next time around, unpack needs to be called, together with the unpack method extensions from ISV1 and ISV2 solutions.

  1. Unpack is run and assigns the variables from the packed state (assuming it's the right version) to the base class variables.
  2. ISV2 unpack post-method handler is called and needs to retrieve only the part of the container which is relevant to ISV2 solution
  3. ISV1 unpack post-method handler is called and needs to do the same 

Steps 2 and 3 cannot be done reliably. Say we copy the macro definitions over from the base class (assuming the members are public and can be accessed from our augmentation class), or we duplicate all those variables in unpack and hope nothing changes in the future :) - and in unpack we read the base class's sub-part of the container into them. But how can we ensure the next part of the container belongs to our extension? The ISV1 and ISV2 post-method handlers are not necessarily called in the same order for unpack as they were for pack.

All in all, this just does not work.

Note

The line below is perfectly fine in X++ and will not cause issues, which is why the base unpack() would not fail even if the packed container also held state for some of the extensions.

[cn, value, value2] = ["SomeClass", 4, "SomeValue", "AnotherClass", true, "more values"];

The container being assigned from has more values than the left side expects; the extra values are simply ignored.

Solution

In order to solve this problem and make the behavior deterministic, we came up with a way to uniquely identify each extension's packed state by name, and to let ISVs set/get this state by name.

With Platform Update 5 we have now released this logic at the RunBase level. If you take a look at that class, you will notice a few new methods:
  • packExtension - appends the packed state for this extension to the packed state container (from the base class or other ISV extensions), prefixing it with the name of the extension
  • unpackExtension - looks through the packed state container and finds the sub-part for this particular extension based on the extension name
  • isCandidateExtension - evaluates whether a passed-in container could be an extension's packed state; for that it needs to consist of the name of the extension plus the packed state in a container
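
To illustrate the idea (this is my own sketch of the mechanism, not the actual RunBase implementation), a named sub-state can be found in the packed container regardless of the order in which the extensions were packed:

// Illustrative sketch only - not the actual RunBase code.
// Each extension's state is stored as a nested ["Name", state] container,
// so it can be located by name regardless of packing order.
class DEV_NamedStateSketch
{
    // Appends the extension's state, tagged with its name
    public static container packNamed(container _packed, str _name, container _state)
    {
        return _packed + [[_name, _state]];
    }

    // Finds the state previously packed under _name, or conNull() if absent
    public static container unpackNamed(container _packed, str _name)
    {
        for (int i = 1; i <= conLen(_packed); i++)
        {
            if (typeOf(conPeek(_packed, i)) == Types::Container)
            {
                container candidate = conPeek(_packed, i);
                if (conLen(candidate) == 2 && conPeek(candidate, 1) == _name)
                {
                    return conPeek(candidate, 2);
                }
            }
        }

        return conNull();
    }
}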
You can read more about it and look at an example follow-up from mfp's post below:

Hope this helps!

Thursday, March 30, 2017

Development tutorial: Platform improvements for handling X++ exceptions the right way

A while back I wrote about a pattern we discovered in X++ code which could lead to serious data consistency issues. You can read it again and look at an example mfp wrote up here:
http://kashperuk.blogspot.dk/2016/11/tutorial-link-handling-exceptions-right.html

With the release of Platform update 5 for Dynamics 365 for Operations we should now be better guarded against this kind of issue.

Let's look at the below example (needs to be run in USMF company):

class TryCatchAllException
{
    public static void main(Args _args)
    {
        setPrefix("try/catch example");

        try
        {
            ttsbegin;

            TryCatchAllException::doSomethingInTTS();

            ttscommit;
        }
        catch
        {
            info("Inside main catch block");
        }

        info(strfmt("Item name after main try/catch block: %1", InventTable::find("A0001").NameAlias));
    }

    private static void doSomethingInTTS()
    {
        try
        {
            info("Doing something");
   
            InventTable item = InventTable::find("A0001", true);
            item.NameAlias = "Another name";
            item.doUpdate();

            throw Exception::UpdateConflict;

            // Some additional code was supposed to be executed here
        }
        catch
        {
            info("Inside doSomething catch block");
        }
  
        info("After doSomething try/catch block");

    }

}

Before Platform Update 5 the result would be:


As you can see, we 
  • went into doSomething()
  • executed the update of NameAlias for the item, 
  • then an exception of type UpdateConflict was thrown
  • at this point the catch-all block caught the exception without aborting the transaction, meaning the item was still updated (we did not abort the transaction because we did not think about this case)
  • we exited doSomething() and committed the transaction, even though we got an exception and did not want anything committed (because the second part of the code did not execute)
  • as a result, the NameAlias is still modified.

Now with Platform Update 5 the result will be:

That is, we

  • went into doSomething(),
  • executed the update of NameAlias for the item,
  • then an exception of type UpdateConflict was thrown
  • at this point the catch-all did not catch this exception type, as we were inside a transaction scope, so the exception went unhandled in this scope and propagated to the one above
  • since the outer scope is outside the transaction, its catch-all block caught the exception,
  • and the NameAlias is unchanged


So, again, the two special exception types (UpdateConflict and DuplicateKey) will simply no longer be handled by a catch-all block inside a transaction scope; you will either need to handle them explicitly or leave it up to the calling context to handle them.

This ensures we do not end up on the erroneous execution path where the transaction is committed even though a special exception type was never properly handled.
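
The corrected pattern for such code is to catch these exceptions explicitly, typically with the standard retry approach. A minimal sketch (the retry limit of 3 is arbitrary):

try
{
    ttsbegin;
    // ... updates that may hit an update conflict ...
    ttscommit;
}
catch (Exception::UpdateConflict)
{
    if (appl.ttsLevel() == 0)
    {
        // Outside the transaction: safe to retry a few times
        if (xSession::currentRetryCount() >= 3)
        {
            throw Exception::UpdateConflictNotRecovered;
        }
        retry;
    }
    else
    {
        // Still inside a transaction: let the outer scope handle it
        throw Exception::UpdateConflict;
    }
}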

Hope this helps!

Saturday, March 18, 2017

Development tutorial: SysExtension framework in factory methods where the constructor requires one or more arguments

Background

At the Dynamics 365 for Operations Technical Conference earlier this week, Microsoft announced its plans around overlayering going forward. If you have not heard it yet, here's the tweet I posted on it:
The direct impact of this change is that we should stop using certain patterns when writing new X++ code.

Pattern to avoid

One of these patterns is a factory method implemented through a switch block, where, depending on an enumeration value (another typical example is a table ID), the corresponding sub-class is returned.

First off, it couples the base class too tightly with its sub-classes, which it should not be aware of at all.
Secondly, because the application model where the base class is declared might be sealed (e.g., foundation models are already sealed), you would not be able to add additional cases to the switch block, basically locking the application up for any extension scenarios.

So, that's all good and fine, but what can and should we do instead?

Pattern to use

The SysExtension framework is one way of replacing this flawed factory method implementation.

This has been described in a number of posts already, so I won't repeat it here. Please read the posts below instead, if you are unfamiliar with how SysExtension can be used:
In many cases in existing X++ code, the constructor of the class (the new() method) takes one or more arguments. In such cases you cannot simply use the SysExtension framework methods described above.

Here's an artificial example I created to demonstrate this:
  • A base class that takes one string argument in the constructor. This could also be abstract in many cases.
  • Two derived classes that need to be instantiated depending on the value of a NoYesUnchanged enum passed into the construct() method.
public class BaseClassWithArgInConstructor
{
    public str argument;

    
    public void new(str _argument)
    {
        argument = _argument;
    }

    public static BaseClassWithArgInConstructor construct(NoYesUnchanged _factoryType, str _argument)
    {
        // Typical implementation of a construct method
        switch (_factoryType)
        {
            case NoYesUnchanged::No:
                return new DerivedClassWithArgInConstructor_No(_argument);
            case NoYesUnchanged::Yes:
                return new DerivedClassWithArgInConstructor_Yes(_argument);
        }

        return new BaseClassWithArgInConstructor(_argument);
    }
}

public class DerivedClassWithArgInConstructor_No extends BaseClassWithArgInConstructor
{
}

public class DerivedClassWithArgInConstructor_Yes extends BaseClassWithArgInConstructor
{
}

And here's a Runnable class we will use to test our factory method:

class TestInstantiateClassWithArgInConstructor
{        
    public static void main(Args _args)
    {        
        BaseClassWithArgInConstructor instance = BaseClassWithArgInConstructor::construct(NoYesUnchanged::Yes, "someValue");
        setPrefix("Basic implementation with switch block");
        info(classId2Name(classIdGet(instance)));
        info(instance.argument);
    }
}

Running this now would produce the following result:
The right derived class with the correct argument value returned
OK, so to decouple the classes declared above, I created a "factory" attribute that takes a NoYesUnchanged enum value as input.

public class NoYesUnchangedFactoryAttribute extends SysAttribute
{
    NoYesUnchanged noYesUnchangedValue;

    public void new(NoYesUnchanged _noYesUnchangedValue)
    {
        noYesUnchangedValue = _noYesUnchangedValue;
    }

    public NoYesUnchanged parmNoYesUnchangedValue()
    {
        return noYesUnchangedValue;
    }
}

Let's now decorate the two derived classes and modify the construct() on the base class to be based on the SysExtension framework instead of the switch block:

[NoYesUnchangedFactoryAttribute(NoYesUnchanged::No)]
public class DerivedClassWithArgInConstructor_No extends BaseClassWithArgInConstructor
{
}

[NoYesUnchangedFactoryAttribute(NoYesUnchanged::Yes)]
public class DerivedClassWithArgInConstructor_Yes extends BaseClassWithArgInConstructor
{
}

public class BaseClassWithArgInConstructor
{
    // ...
    public static BaseClassWithArgInConstructor construct(NoYesUnchanged _factoryType, str _argument)
    {
        NoYesUnchangedFactoryAttribute attr = new NoYesUnchangedFactoryAttribute(_factoryType);
        BaseClassWithArgInConstructor instance = SysExtensionAppClassFactory::getClassFromSysAttribute(classStr(BaseClassWithArgInConstructor), attr);

        return instance;
    }
}

Running the test now will however not produce the expected result:
The right derived class is returned but argument is missing
That is because by default the SysExtension framework creates a new instance of the corresponding class via dictClass.makeObject(), which ignores the constructor arguments.

Solution

In order to account for the constructor arguments, we need to use an Instantiation strategy, which is passed as the third argument when calling SysExtensionAppClassFactory.

Let's define that strategy class:

public class InstantiationStrategyForClassWithArg extends SysExtAppClassDefaultInstantiation
{
    str arg;

    public anytype instantiate(SysExtModelElement  _element)
    {
        SysExtModelElementApp   appElement = _element as SysExtModelElementApp;
        Object                  instance;

        if (appElement)
        {
            SysDictClass dictClass = SysDictClass::newName(appElement.parmAppName());
            if (dictClass)
            {
                instance = dictClass.makeObject(arg);
            }
        }

        return instance;
    }

    protected void new(str _arg)
    {
        this.arg = _arg;
    }

    public static InstantiationStrategyForClassWithArg construct(str _arg)
    {
        return new InstantiationStrategyForClassWithArg(_arg);
    }
}

As you can see above, we had to

  • Define a class extending from SysExtAppClassDefaultInstantiation (it's unfortunate that it's not an interface instead).
  • Declare all of the arguments needed by the corresponding class we plan to construct.
  • Override the instantiate() method, which is invoked by the SysExtension framework when the time comes
    • In there we create the new object instance of the appElement and, if necessary, pass in any additional arguments - in our case, arg.
Let's now use that in our construct() method:

public class BaseClassWithArgInConstructor
{
    //...
    public static BaseClassWithArgInConstructor construct(NoYesUnchanged _factoryType, str _argument)
    {
        NoYesUnchangedFactoryAttribute attr = new NoYesUnchangedFactoryAttribute(_factoryType);
        BaseClassWithArgInConstructor instance = SysExtensionAppClassFactory::getClassFromSysAttribute(
            classStr(BaseClassWithArgInConstructor), attr, InstantiationStrategyForClassWithArg::construct(_argument));

        return instance;
    }
}

If we now run the test, we will see the following:
The right derived class with the correct argument value returned 

Note

If you modify the attributes/hierarchy after the initial implementation, you might need to clear the cache; restarting IIS is not enough, since the cache is also persisted to the database. You can do that by invoking the static method below:

SysExtensionCache::clearAllScopes();

Parting note

There is a problem with the solution described above. The problem is performance.
I will walk you through it, as well as the solution, in my next post.

Thursday, January 19, 2017

Warehouse Mobile Devices Portal - Now as an App for your mobile device

Our team has been working on an application that will eventually replace the existing web-site-based Warehouse Mobile Devices Portal (WMDP), as we see warehouses relying more and more on phone-based devices with added scanner capabilities (the Honeywell Dolphin 75e, for example).

We are happy to announce that the app is now publicly available.
Installation and configuration instructions, as well as download links, can be found on the Dynamics 365 for Operations wiki website:
https://ax.help.dynamics.com/en/wiki/install-and-configure-dynamics-365-for-operations-warehousing/

The app works on Windows 10 devices, as well as Android devices. iOS is not supported, since such devices are generally not used in warehouses due to high cost and some technical limitations.

The app only works with Dynamics 365 for Operations, so older releases like AX 2012 R3 can only use the old WMDP web site.

Like the old WMDP, the app is an alternative front end to all the business logic and screen generation logic in X++, and so it does not contain any business logic itself.

As you will see, we have reworked the user interface, focusing on showing as few controls as possible at a time, adding cards for grouped data such as on-hand info and work lists, a special keyboard for entering quantities, support for scanning, and more.

Here is how the app looks on a Windows Phone:

Log in screen for new WMDP app


We are very excited to get this out into your hands and are looking for feedback. So don't be shy :)

Update 2017-01-20: Read a much more detailed description from Markus on the SCM blog:
https://blogs.msdn.microsoft.com/dynamicsaxscm/2017/01/20/announcing-dynamics-365-for-operations-warehousing/

Thursday, November 24, 2016

Tutorial Link: Handling exceptions the right way in X++

Michael and I have spent the last couple of weeks uncovering some difficult-to-reproduce bugs in the Warehouse management code, and one of the things we discovered is a pattern that can lead to very unpredictable behavior when used incorrectly.

I encourage all of you to read it and make sure all of your code is up to standard.

https://blogs.msdn.microsoft.com/mfp/2016/11/24/x-the-catch/


Thanks!

Friday, November 04, 2016

Tutorial Link: Executing outbound work with pending demand replenishment work

Introduction

In Dynamics 365 for Operations we solved one of the long-standing complaints, where large work orders could not be started because of pending replenishment. A typical workaround was to artificially break the replenishment lines out into a separate work order, so workers could pick the majority of the items. Then, of course, you'd run into problems merging the two (or more) Target LPs into one (which we now also support - see my previous blog post).

Read the feature description and walk through a sample flow on our SCM blog:
https://blogs.msdn.microsoft.com/dynamicsaxscm/2016/11/04/processing-work-that-is-awaiting-demand-replenishment/

For those on AX 2012 R3

We have not back-ported this feature to AX 2012 R3 yet. It is in the backlog, but there is no ETA for when that will happen.

Update: This is now available on LCS under KB3205828

Feedback

We'd love to hear your feedback on this feature if you are going to use it in your production environments.



Wednesday, November 02, 2016

Tutorial: License plate consolidation in Dynamics 365 for Operations (1611)

Introduction to scenarios

The Microsoft Dynamics AX Warehouse management module supports a number of advanced warehouse scenarios, thanks to the flexibility offered by the concept of work and work lines, which cover any and all operations in the warehouse. The system has a number of complex configuration possibilities that determine how work is created. At the same time, AFTER it has been created, work is fixed, and any necessary changes require a lot of manual supervisor intervention or, in many cases, are not possible at all.

Imagine the following two scenarios:

Scenario 1

A customer orders a number of items from us. Based on our work template setup, multiple work orders are created to pick these items from the warehouse, say, some are picked from a cooled area, while the rest is coming from the regular picking area. All of the items are placed into a staging area location after picking, to be loaded on a delivery truck.

Scenario 2

A customer orders a number of items from us. A sales order is created in our system, released to warehouse, so work is created. The picking commences, and the goods arrive at the staging location, but are not shipped the same day, because the truck to pick them up only arrives tomorrow. The same evening, the same customer orders more items from us. Correspondingly, a new sales order is created and released to warehouse, creating more work. The pick is completed, and the goods are placed in the staging location ready to be loaded the next day. The warehouse manager decides to ship both shipments on the same truck tomorrow, adding the second shipment to the same load that contains the first shipment.


In both scenarios we now have two license plates on the same load, going to the same customer, sitting in the same location. When the truck comes, they will need to be loaded separately, one by one, even though in many cases the items would fit perfectly well on just one license plate.
Until now, the warehouse worker had no way to merge the two license plates so everything ships on just one license plate - at least in the system (it probably already happens outside of the system).

In the Fall release of Microsoft Dynamics 365 for Operations (1611) we have added the ability to consolidate items on one license plate with items on another license plate within the same location, where there is work behind one or more of the license plates. This is supported by a new type of mobile device menu item with the Indirect activity code “Consolidate license plates”.

Scenario walkthrough

All you need to do to enable this mobile device flow is create a new mobile device menu item that looks like the screenshot below:


LP consolidation mobile device menu item
For the scenario walk-through I have created two sales orders as below:
  • Sales order 000781 for customer US-001, which contains two lines:
    • 10 ea of item M9200 from warehouse 51
    • 15 ea of item M9201 from warehouse 51
  • Sales order 000782 for customer US-001, which contains one line:
    • 7 ea of item M9201 from warehouse 51

Sales order 000781 was released to warehouse first, before 000782 existed, resulting in the creation of Shipment USMF-000008 on Load USMF-000010.
Sales order 000782 was then added to the same Load, and also released to warehouse, resulting in the creation of Shipment USMF-000009.


The following picking work was created for these 2 shipments:

Work order details
This scenario corresponds directly to Scenario 2 I described above.

Now, for both work orders the initial picks are executed (one or different workers), and for both the goods end up at the STAGE location, awaiting loading, as shown below:

Work order details after initial pick was executed
From here on, license plates TLP_001 and TLP_002 follow the same path in the warehouse, more specifically they both will be picked up from STAGE and loaded into the truck at BAYDOOR location.

If the warehouse worker at the outbound dock makes the decision to consolidate these two license plates (currently, this is only an ad-hoc decision by the worker, planned consolidations are not supported), he can do that using the above mobile device menu item. Here is how the flow would look on the mobile device:

Step 1

You are presented with a screen, where you need to enter the Target LP. This is the License plate, where the items will end up after the consolidation.
  • This can be a new License plate. There is no way to print this new LP Tag from this screen at this point, however, so keep that in mind.
  • This can be an existing empty License plate. This could, for example, be useful, if you want to merge the contents of 2 half-pallets onto 1 empty euro pallet. Another case is if the pallet, where the items currently are placed, is damaged or “unshippable”.
  • This can be an existing full License plate, which is set as a Target LP on an existing work order with Work order type “Sales order” or “Transfer order issue”.
  • It must not be work that has a Container Id on the header or any of the lines.

LP Consolidation, Step 1

Step 2

The Target LP was accepted, and now you are asked to scan the LP you want to merge onto the Target LP.
  • The LP to merge needs to be a target license plate for an existing work order of Work order type Sales order or Transfer order issue.
  • The LP to merge needs to be in the same location as the Target LP, and it cannot be the final shipping location (that’s too late, since the goods are already Picked at this point).
  • The LP to merge needs to relate to the same Load as the Target LP.
    • It should also have the same delivery information if more than one shipment is involved. Namely, the Delivery name, Customer account, Delivery address and Mode of delivery should match.
  • The remaining steps in the flow for both work orders being merged need to match. This ensures all relevant steps are executed - for example, that labels are printed at the appropriate time in the flow for all consolidated items.
  • It must not be work that has a Container Id on the header or any of the lines.
LP consolidation, Step 2

Step 3

When the LP to merge is accepted, the worker is presented with a confirmation screen that shows a summary of all items on that license plate. This helps ensure the right LP was scanned before the actual consolidation of the two work orders happens.

In the case below, there was only one item on TLP_002, with a total quantity of 7.00 ea.

LP consolidation, Step 3

Step 4


Once the worker confirms by pressing OK, the consolidation is executed, merging the two work orders together and moving all items from LP to merge to the Target LP.


The worker gets a confirmation message that the license plates were merged, and is presented with a screen where he can continue scanning in any following license plates to merge onto the Target LP.

LP consolidation, Step 4

Here is how the work looks after consolidation happens:


The work related to the LP to merge - in our scenario walk-through, TLP_002 - is now marked as Closed, and by reviewing the work lines you can see the final Put step was changed so it now points to the STAGE location instead of BAYDOOR. The only thing that changed is which license plate it is put onto, namely TLP_001 instead of TLP_002.

Work order details after LP consolidation - Merge From work
The work related to the Target LP - in our scenario walk-through, TLP_001 - is still In progress, and you can see the quantities on the remaining Pick/Put pair were increased by the quantity from the work related to TLP_002.

Work order details after LP consolidation - Consolidated work

If you review the corresponding work transactions, you will see that an extra transaction corresponding to the load line for M9201 from sales order 000782 has been added to the Pick line to represent the additional Pick quantity.

Accordingly, the inventory transactions also reflect the fact items were moved from TLP_002 to TLP_001, and the new work reservations are based on that as well.

Mobile device menu item configuration option “Cancel remaining origin work lines”

We expect most companies to run with this configuration option turned on. It comes into play in the following situation: when there are further staging steps on both work orders, this option lets you forfeit the steps on the work being merged from, so that the steps on the consolidated work are the ones executed for all merged items. Specifically, any “extra” steps on the work being merged will be Cancelled.

If there are specific reasons why all work steps need to be executed separately for the merged work order, you should not enable this configuration option. As a result, however, you will not be able to consolidate this LP onto another one.

See a more detailed explanation in the Help Text for this configuration on the menu item.

For those behind on updates :)

This feature has been back-ported to Microsoft Dynamics AX 2012 R3.
You can download it from KB number 3190562


Let us know what you think of this new feature!

Tutorial: Movement of inventory with associated work in Warehouse management, Dynamics 365 for Operations (1611)

Introduction to supported scenarios

For the Fall release of Dynamics 365 for Operations (1611) we have built various features to support the theme of increasing the flexibility in the daily operations of warehouse workers.

Imagine the following scenarios:

Scenario 1

A company has a relatively small receiving area, and it’s congested with pallets and boxes awaiting put away. A large shipment is expected on this day, so the receiving clerk decides to clear up the receiving area, moving some of the pallets to a secondary inbound staging area.

Scenario 2

An experienced warehouse worker going around the warehouse notices an opportunity to consolidate items in one location instead of having them spread out across 3 nearby locations with a small quantity in each. He wants to move the items from each of these locations into the same location, onto the same license plate, consolidating the quantity.

Scenario 3

A pallet is awaiting shipment in a staging location, say STAGE01, which is near BAYDOOR01. However, due to a change of plans, the truck is going to arrive at BAYDOOR04. The shipping clerk is aware of this and needs to ensure the truck does not have to wait to be loaded from STAGE01. He therefore decides to move the items in that shipment from STAGE01 to STAGE04, much closer to their new destination.


None of these scenarios are possible today due to one simple fact - the items that need to be moved have work pending against them, meaning they are physically reserved at the warehouse location level (or even the license plate level) and therefore cannot be moved.

We have built this capability into the Fall release of Microsoft Dynamics 365 for Operations (1611). Now you can decide which warehouse workers are allowed to move reserved inventory and which are not. This gives regulated warehouses flexibility for cases where they may not accept a worker deciding on a new pick location for already created pick work, or where a warehouse manager wants to steer which capabilities an inexperienced worker should have.

Scenario 2 walkthrough

In the standard demo data in company USMF we already have some data that can help showcase this new scenario in warehouse 24.
There are two sales orders, 000748 and 000752, both planning to ship 10 pcs of A0001, and both have been released to the warehouse, so the corresponding work orders USMF-000001 and USMF-000002 exist, each picking 10 pcs of A0001 from location FL-001. There is a total of 100 pcs of A0001 in this location, but only 80 are physically available because of the two work order reservations.
So if warehouse worker 24 opened the Movement mobile device menu item on his mobile device, he would see the following picture for location FL-001:

Moving physically available quantity

As you can see, the worker is only allowed to move 80 of the 100 pcs physically present in the location. Let's fix that and configure the worker to be allowed to move reserved inventory.

Configure worker to allow movement of inventory with associated work

Now, if we go into the movement flow on the mobile device again, the screen will look as below:

Moving all physical inventory from a location

Now that the worker is allowed to move reserved inventory, he can move all 100 pcs of item A0001. Let’s go ahead and do that, moving the items to FL-007 to a new license plate LP_V_001.

Movement of inventory - To information

Let’s now review what happened behind the scenes:
  1. A new Inventory movement work order was created, from FL-001 to FL-007, for 100 pcs of A0001. It was immediately executed, hence its Work status is Closed.
  2. All related work orders were updated, so their Pick lines point to location FL-007 instead of location FL-001, as you can see on the screenshot below.
Work order after inventory was moved to FL-007

Now location FL-001 is empty and can be used for whatever warehouse worker 24 had in mind - for example, to put away goods just reported as finished (say, FL-007 was smaller in size and did not fit the RAF'ed pallet).

The other two scenarios are pretty much the same in terms of the flow, the only difference being the reservations behind them.

Current limitations

  • The work reservations that can be moved today are limited to Sales order, Transfer order issue, Transfer order receipt, Purchase order and Replenishment work.
  • Moving items is restricted in a way that prevents splitting work lines. So if you have a work line for 100 pcs of item A from location Loc1, you won't be able to move only, say, 30 pcs of item A from there to another location, as that would require splitting the existing work line into 30 and 70, since the locations would now differ.
  • For staging scenarios, where the license plate we move the goods from, or the license plate we move the goods to, is set as a Target LP for a work order, only movement of the entire LP is allowed, so as not to break up the Target LP.
  • Only ad hoc movement is currently supported. That means you will not be able to move reserved inventory through the movement-by-template mobile device menu items.

For those behind on updates :)

This feature has also been back-ported to Microsoft Dynamics AX 2012 R3 and will be available as part of CU12.
It can also be downloaded individually through KB number 3192548



This is great stuff, give it a try and let us know if you have any feedback!

Thanks