Channel: dynamics ax – Goshoom.NET Dev Blog

Test Data Transfer Tool: Getting errors from log


I imported data to AX with the Test Data Transfer Tool and it told me that some errors occurred. The log file is quite large, so I asked myself what’s the easiest way to find these errors. This is my approach, using a very simple PowerShell script:

[xml]$dplog = Get-Content C:\Windows\System32\dplog.xml
$dplog.root.item | ? Status -eq "Failed"

Note that this is for PowerShell 3; you would have to change it to something like this if you still use PowerShell 2:

[xml]$dplog = Get-Content C:\Windows\System32\dplog.xml
$dplog.root.item | ? {$_.Status -eq "Failed"}

This is how the output looked in my case:

status      : Failed
message     : One or more indexes were disabled on table TableXYZ to allow the data to import.
              Use the following SQL to enable the indexes once you've fixed the data:
                  ALTER INDEX ALL ON [TableXYZ] REBUILD
              The original index violation message is:
              Cannot insert duplicate key row in object 'dbo.TableXYZ' with unique index
              'I_104274XYZIDX'. The duplicate key value is (5637144576, 196, , ).
              The statement has been terminated.
direction   : Import
action      : Overwrite
database    : TestAX
table       : TableXYZ
targetTable : TableXYZ
folder      : C:\TestData

Customer Experience Improvement Program dialog


If you create a script that runs the AX client, e.g. to compile CIL, you might find that it gets stuck immediately after starting AX. It’s typically because your build user is asked to join the Customer Experience Improvement Program. One option is to log in as the build user and choose yes or no. But if the account doesn’t have permissions for interactive login, or you simply look for an easier way, you can set it directly in the SysUserInfo.SqmEnabled field.

I remembered this problem, but I couldn’t remember at all where the option is saved. From now on, I can always find it here. :)
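For reference, a minimal X++ sketch of setting the flag for a build account. The field name comes from the discussion above; the user ID is hypothetical and the exact meaning of the values is an assumption, so verify it in your environment first.

static void setSqmAnswer(Args _args)
{
    SysUserInfo sysUserInfo;

    ttsBegin;
    // 'bldUser' is a hypothetical build account ID - use your own
    select forUpdate sysUserInfo
        where sysUserInfo.Id == 'bldUser';

    if (sysUserInfo)
    {
        // Assumption: recording an answer here prevents the CEIP dialog
        sysUserInfo.SqmEnabled = NoYes::No;
        sysUserInfo.update();
    }
    ttsCommit;
}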

Custom date and time format


I was extending a customization of Dynamics AX when I ran into the following piece of code. It formats the current date and time to something like 20150525_0042.

str dateValue, dateFormat;
 
dateValue   = date2str( systemDateGet(),
                        321,
                        DateDay::Digits2,
                        DateSeparator::None,
                        DateMonth::Digits2,
                        DateSeparator::None,
                        DateYear::Digits4,
                        DateFlags::FormatAll);
 
dateFormat = strFmt(    "%1%2%3",
                        dateValue, '_',
                        subStr( strRem(time2Str(timeNow(), TimeSeparator::Space, TimeFormat::Auto), ' '),
                                0,
                                4));

If I hadn’t formatted the code to make it more readable, you would struggle to follow what it does. Even the original developer had the same problem – I already fixed two bugs (!) in this code snippet.

Instead, I dropped the code completely and replaced it with this:

str formatted = System.String::Format(
                                '{0:yyyyMMdd_HHmm}',
                                DateTimeUtil::newDateTime(systemDateGet(), timeNow()));

Better, isn’t it? It’s not only shorter; more importantly, it’s much easier to understand and maintain. It would be even simpler if I used DateTimeUtil::utcNow() instead of keeping the original logic with systemDateGet() and timeNow().

This is just a simple example of how .NET Interop from X++ can make your life easier – the amount of .NET code available to you is huge. In this particular case, I called the String.Format() method with a custom date and time format. You can also use custom formats when parsing strings to dates (DateTime.ParseExact()), which is probably even more useful.
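For example, parsing the string produced above back into a date and time might look like this (a sketch; the format string simply mirrors the one used for formatting):

str             input = '20150525_0042';
System.DateTime parsed;

parsed = System.DateTime::ParseExact(
            input,
            'yyyyMMdd_HHmm',
            System.Globalization.CultureInfo::get_InvariantCulture());

info(parsed.ToString());

ParseExact() throws an exception if the input doesn’t match the format exactly, which is often preferable to silently accepting malformed data.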

Talk in Prague – Dynamics Technical Conference: Direction of AX


I apologize, but if you’re unfortunate enough not to speak Czech and not to live near Prague, you can probably stop reading. Hopefully you’ll find something more useful here next time.

Others please note that I’m giving a talk about Dynamics AX in Prague on Tuesday, 24 February 2015.

At the beginning of February, another Microsoft Dynamics Technical Conference took place in Seattle, this time with more than 120 sessions about Dynamics AX and Dynamics CRM. In Prague, I would like to share some interesting information from this conference (e.g. about AX 7 and Lifecycle Services) and try to sketch what impact the forthcoming changes will have on the usage and development of Dynamics AX in the future.

The event is being held in the private room of pizzeria Kmotra, from 16:30. After the more official part, we’ll continue with informal discussion about Dynamics AX and anything related.

If you’re going to attend, please register here. Knowing the approximate number of attendees will help me prepare the event. Thank you in advance.

Refactoring of financial reasons


If you’re not sure what “refactoring” means, note that it is a change of the structure of code without changing how the application behaves. It might sound useless, but it’s actually very important. You must realize that the quality of code doesn’t depend only on its runtime behavior – whether you can easily maintain and extend the code is equally important.

I tend to do refactoring and implementation of new features as two distinct activities; that way I always know whether I’m trying to change the behavior or keep it the same, and I can test the code accordingly. If you’re lucky enough to have automated tests, they will help you check that your refactoring didn’t accidentally change the behavior.

I often spend more time refactoring existing code than implementing something new. That doesn’t mean that I’m wasting time. It often means that the change itself is a single line of code – once the code is made easily extensible.

I work on improving and extending existing code bases very often, so I could show many typical problems and refactorings; nevertheless, I obviously don’t want to discuss my clients’ code publicly. What pushed me to write this blog post was an interesting refactoring of a class from the standard AX application.

My goal was adding a new module to financial reasons in AX 2012:

Financial reasons

I thought it would be pretty easy, because it looks like a type of change that original developers could expect and prepare for. Unfortunately the implementation makes this kind of change quite difficult and error prone.

Let’s take one specific method as an example. This is the validateDelete() method of the ReasonFormTable class, which is called from the Reasons form:

public boolean validateDelete(ReasonTable _reasonTable)
{
    boolean ret = true;
 
    switch(reasonCodeAccountType)
    {
        case ReasonCodeAccountTypeAll::FixedAssets:
 
            // Validation fails if any fields are checked except the fixed asset field.
            // <GEERU>
            if (_reasonTable.Bank || _reasonTable.Cust || _reasonTable.Ledger || _reasonTable.Vend || _reasonTable.rCash || _reasonTable.rAsset)
            // </GEERU>
            {
                ret = false;
            }
            break;
        case ReasonCodeAccountTypeAll::Bank:
 
            // Validation fails if any fields are checked except the bank field.
            // <GEERU>
            if (_reasonTable.Asset || _reasonTable.Cust || _reasonTable.Ledger || _reasonTable.Vend || _reasonTable.rCash || _reasonTable.rAsset)
            // </GEERU>
            {
                ret = false;
            }
            break;

I’m showing just the first two cases, but there are eight in total, all with almost identical code. The duplication itself tells me that the code has a couple of problems. If you want to add a new module, you’ll have to change all existing cases. You’ll have to change a lot of code in many places, which means many opportunities for errors. And it’s not very readable – it doesn’t express the intention (“check if the reason isn’t used in other modules”) very well, and it would be difficult to spot the error if you accidentally used a wrong field somewhere.

You probably noticed that the developer implementing the Russian functionality must have already modified every case. He didn’t refactor the solution to something better; he blindly followed the existing pattern. But that doesn’t mean the problem remained exactly the same as before – it actually became much worse, because the number of modules to handle increased from five to seven.

Instead of changing a few dozen places in the class, the developer should have refactored the solution to be more extensible and then modified just a few places. That’s the approach all developers should follow – you can start with a simple solution, but when it starts causing problems, don’t wait; refactor it. It will help not only with the current task, but also with all later extensions and bug fixes.

Let’s look at my refactoring. First of all, I realized that I would need some mapping between modules and table fields. I used a switch similar to the one in the original code (although I had more options, such as using a map); the difference is that it’s now encapsulated in a separate method that doesn’t do anything else. It can be used from many places, and if I add a new module, I’ll know where to go and simply add one more case there.

private FieldId typeToFieldId(ReasonCodeAccountTypeAll _type)
{
    switch (_type)
    {
        case ReasonCodeAccountTypeAll::FixedAssets:
            return fieldNum(ReasonTable, Asset);
 
        case ReasonCodeAccountTypeAll::Bank:
            return fieldNum(ReasonTable, Bank);
 
        case ReasonCodeAccountTypeAll::Cust:
            return fieldNum(ReasonTable, Cust);
 
        case ReasonCodeAccountTypeAll::Ledger:
            return fieldNum(ReasonTable, Ledger);
 
        case ReasonCodeAccountTypeAll::Vend:
            return fieldNum(ReasonTable, Vend);
 
        case ReasonCodeAccountTypeAll::RCash:
            return fieldNum(ReasonTable, rCash);
 
        case ReasonCodeAccountTypeAll::RAsset:
            return fieldNum(ReasonTable, rAsset);
 
        default:
            return 0;
    }
}

To be able to work with “all module fields except a specific one”, I need a list of all fields representing module check boxes. I could define it in code, but the application already contains such a definition – in a field group on the table. Therefore I can use reflection to read the list of fields from there (and I cache it to a member variable, to avoid running the same code too often).

private Set accountFields()
{
    DictFieldGroup fieldGroup;
    int            i;
 
    if (!accountFields)
    {
        accountFields = new Set(Types::Integer);
        fieldGroup = new DictFieldGroup(tableNum(ReasonTable), tableFieldGroupStr(ReasonTable, AccountType));
 
        for (i = 1; i <= fieldGroup.numberOfFields(); i++)
        {
            accountFields.add(fieldGroup.field(i));
        }
    }
 
    return accountFields;
}

Now I have everything I need to iterate over module fields and check their values:

public boolean isUsedForAnotherModule(ReasonTable _reasonTable)
{
    FieldId       fieldToCheck;
    FieldId       acceptedField = this.typeToFieldId(reasonCodeAccountType);
    SetEnumerator enumerator    = this.accountFields().getEnumerator();
 
    while (enumerator.moveNext())
    {
        fieldToCheck = enumerator.current();
 
        if (fieldToCheck != acceptedField && _reasonTable.(fieldToCheck) == NoYes::Yes)
        {
            return true;
        }
    }
 
    return false;
}

It should be easy to follow. The acceptedField variable contains the field used for the current module (I used my new method for mapping from module type to field ID). Then it enumerates the module fields and checks whether any field (excluding the accepted one) is set to NoYes::Yes.

Now the part of validateDelete() that checks for other modules can be reduced to a single method call. This is our new implementation:

public boolean validateDelete(ReasonTable _reasonTable)
{
    boolean ret = !this.isUsedForAnotherModule(_reasonTable);
 
    // Do additional check for Letter of Guarantee fields
    …
 
    return ret;
}

Isn’t it better? It’s easy to read, because you simply read the name of the method and don’t have to deal with a dozen lines of code. And if you need to add a new module, you don’t have to change validateDelete() in any way.

I almost hear some developers refusing to start such a refactoring, saying that it takes too much time or it would be too much code. I strongly disagree. It might have taken more time than the mindless copy & paste, but it was still just a few minutes – that’s really nothing to worry about. And it will save time whenever we need to extend or fix the code.

Regarding the length of the code – the solution is already significantly shorter than the original method, and the real savings are only beginning. For example, look at datasourceInitValue():

public void datasourceInitValue(ReasonTable _reasonTable)
{
    switch(reasonCodeAccountType)
    {
        case ReasonCodeAccountTypeAll::FixedAssets:
            // default the asset account type
            _reasonTable.Asset = true;
            break;
        case ReasonCodeAccountTypeAll::Bank:
            // default the bank account type
            _reasonTable.Bank = true;
            break;
 
        // … and the same for other modules …
    }
}

You see that there is another instance of the switch statement mapping account types to fields. Because we already extracted the logic to a separate method, we can reuse it and replace the whole method with this:

public void datasourceInitValue(ReasonTable _reasonTable)
{
    if (reasonCodeAccountType != ReasonCodeAccountTypeAll::AllValues)
    {
        _reasonTable.(this.typeToFieldId(reasonCodeAccountType)) = true;
    }
}

Notice that I threw away almost all the code of datasourceInitValue(), and that’s nothing unusual when refactoring not-so-good code. I often wonder how much time it took to develop several hundred lines of code that I’m removing, because they aren’t actually needed. It’s not refactoring that leads to a lot of code – it’s the lack of refactoring. And having more code than actually needed is bad, because it means more code to read, more code to maintain, more places where bugs can hide and so on.

I didn’t invent refactoring, DRY, SOLID or anything like that; it’s all been known for years and you can find many books about these things. But I see that many developers are still not aware of these principles and techniques, therefore it’s important to repeat them. Hopefully seeing real code will help some developers who are willing to improve themselves but who aren’t fans of abstract definitions.

AX management shell – multiple models


Even if you’re not familiar with PowerShell, you probably won’t have much trouble using the AX management cmdlets. You just call the right command, set some parameters and it’s done.

# Just for example
Export-AXModel -Model "MyModel" -File "c:\my.axmodel"

If you don’t know the parameters, you’ll simply use Get-Help or some of its aliases, such as man:

man Export-AXModel -Detailed

But if you don’t know PowerShell well, you may not know how, say, to work with multiple models at once. This is often useful, therefore it’s worth showing how to do it.

Before we look at any code, make sure you have a development environment such as PowerShell ISE or PowerGUI. These applications will help you save your scripts, discover commands and parameters, set breakpoints, review variables and many other things. No need to do everything directly in the console!

But there is a problem – if you try to use AX cmdlets outside AX management shell, they won’t be recognized. Fortunately the fix is easy – add the following line to your script (including the dot at the beginning):

. 'C:\Program Files\Microsoft Dynamics AX\60\ManagementUtilities\Microsoft.Dynamics.ManagementUtilities.ps1'

You can also run it directly in the console, or even add it to your profile so it’s loaded automatically every time you run PowerShell.

Let’s warm up with a few simple commands (using PowerShell 3.0 syntax):

# Lists all AX modules - nothing new here
Get-AXModel
 
# Lists just model names and layers, to make things more readable
Get-AXModel | select Name, Layer
 
# Lists models in a specific layer
Get-AXModel | ? Layer -eq USR

The “|” symbol is a pipe and it passes the output of one command to the input of another command. It would be great if we could use it with AX cmdlets, e.g. for passing a list of models to Export-AXModel. Unfortunately it’s not supported, and AX cmdlets also don’t accept arrays as arguments, therefore we have to call each command for a single model at a time. It’s not as bad as it might sound, because we can use a loop to sequentially execute a command for every element in an array.

# Let's set some parameters first 
$toDir = 'C:\Models'
$layer = 'USR'
 
# Export all models in a given layer to file
Get-AXModel | ? Layer -eq $layer | % { Export-AXModel -Model $_.Name -File "$toDir\$($_.Name).axmodel" }
 
# Uninstall all modules from a given layer
Get-AXModel | ? Layer -eq $layer | % { Uninstall-AXModel -Model $_.Name -NoPrompt }

It might look a little bit cryptic at first, but you can see it’s rather simple and you shouldn’t have any problems applying the same approach in your own scripts. It’s much faster to write (or copy) something like this than typing, running and waiting for several individual commands.

Connection to external database


You sometimes need to connect from Dynamics AX to another database and read or write data there. The usual approach uses the OdbcConnection class, as described on MSDN in How to: Connect to an External Database from X++ Code [AX 2012]. Although it definitely is a possible solution (I used it as far back as AX 3.0), it has so many problems that I almost never use it these days.

My intention is to show one possible alternative that solves many of these issues. Although the core idea can be used in AX from version 4.0, this blog post uses AX 2012 and depends on several features that don’t exist in older versions.

First of all, look at how you’re supposed to construct the query for OdbcConnection:

sql = "SELECT * FROM MYTABLE WHERE FIELD = "
            + criteria
            + " ORDER BY FIELD1, FIELD2 ASC ;";

and how to get results:

print resultSet.getString(1);
print resultSet.getString(3);

(This is taken from the sample code on MSDN.)

You have to define the query as a plain string, therefore you have to write the SQL code yourself and you won’t get any IntelliSense, compile-time checks or anything like that. You have to execute your SQL against the target database to find errors such as misspelled field names.

You also depend on column indexes when accessing data, which becomes cumbersome very quickly and is easy to break. For example, notice that the query above selects all fields. What will happen if I add another field, e.g. the query starts returning ANewField, Field1 and Field2 instead of Field1 and Field2? The column indexes now refer to different fields, and again the problem won’t be found until the code actually runs.

It’s simply unpleasant for development and horrible for maintenance.

Wouldn’t it be better if we could simply refer to fields as we do in normal X++ development, get compile-time checks and so on? Good news – we can!

Ssms_Table

Before we actually start, let me show you the table that will simulate my external data. It has four fields: ID and Name, which I’m going to import to AX; Status, used for tracking what needs to be processed; and UpdateTime, storing the date and time of the update.

It can be created by the following script:

CREATE TABLE [DataForAX](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](30) NOT NULL,
    [Status] [tinyint] NOT NULL,
    [UpdateTime] [datetime] NULL,
    CONSTRAINT [PK_DataForAX] PRIMARY KEY CLUSTERED 
    (
        [ID] ASC
    )
)

You can also put some test data there:

INSERT INTO DataForAX
VALUES 	(N'Croup', 0, null),
	(N'Vandemar', 0, null)

Note that it’s important for the table to have a primary key, otherwise we wouldn’t be able to update the data.

AxObjects

Let’s also prepare some objects in AX. We’ll need a table storing the data and because the Status field is an integer representing various states, we can very naturally model it as an X++ enum.

Status field values:

  • 0 = Ready
  • 1 = Imported
  • 2 = Failed

Now start Visual Studio, because most of our work will happen there. Create a new C# class library, add it to AOT and drag the table and the enum from Application Explorer to your project. We’ll need them later.

Open Server Explorer (View > Server Explorer) and create a connection to your database. I’m using a SQL Server database on the current machine, but you can use ODBC, connect to remote servers or anything you need.

ServerExplorerAddConnection

ChooseDataSource

AddConnection

Now you should see your tables directly in Server Explorer in Visual Studio:

ServerExplorerTable

Right-click your project in Solution Explorer, choose Add > New Item… and find LINQ to SQL Classes. Set the name (TestDB) and click Add.

AddLinqClass

A database diagram will be added to your project and its design surface will open. Drag the table from Server Explorer to the design surface.

DesignSurface

It will not only render the table in the designer, it will also generate a class representing the table (and a few other things needed for LINQ to SQL). If you want, you can open TestDB.designer.cs and verify that there is a class for DataForAX table:

public partial class DataForAX : INotifyPropertyChanging, INotifyPropertyChanged
{
    private int _ID;
    private string _Name;
    private byte _Status;
    private System.Nullable<System.DateTime> _UpdateTime;
 
    …
}

It’s a more advanced topic, but notice that it’s a partial class. It allows you to extend the class without meddling with code generated by the designer. Extending the class is useful in many scenarios, but we don’t need it today.

There is one thing we should change, though – the type of the Status field is byte, while we would like to use our enum from AX. Go to the designer, open the properties of the Status field and type the name of the enum (ExtRecordStatus) into the Type field.

StatusFieldProperties

MapToEnum

It might look like nothing special, but I think it’s pretty cool. We’re mapping a field from an external database to an X++ enum (albeit through a proxy) and we’ll be able to use it in LINQ queries, when assigning values to the field and so on.

Now add a new class to your project, call it Data, for example, make it public and add the Import() method as shown below:

public class Data
{
    public void Import()
    {
        TestDBDataContext db = new TestDBDataContext();
 
        var query = from d in db.DataForAXes
                    where d.Status == ExtRecordStatus.Ready
                    select d;
 
        foreach (DataForAX record in query)
        {
            ExtTable axTable = new ExtTable()
            {
                ID = record.ID,
                Name = record.Name
            };
 
            if (axTable.ValidateWrite())
            {
                axTable.Insert();
                record.Status = ExtRecordStatus.Imported;
            }
            else
            {
                record.Status = ExtRecordStatus.Failed;
            }
 
            record.UpdateTime = DateTime.UtcNow;
        }
 
        db.SubmitChanges();
    }
}

Let me explain it a little bit. First, we create a data context:

TestDBDataContext db = new TestDBDataContext();

It’s a class that defines where to connect, tracks changes and so on. In real implementations, I usually store the DB server and DB name in the AX database, create a connection string from them (with the help of a connection string builder) and pass the connection string to the data context’s constructor.

Then we create the LINQ query.

var query = from d in db.DataForAXes
            where d.Status == ExtRecordStatus.Ready
            select d;

This simple example can’t show the full power of LINQ, but it’s obvious that we’re not building SQL by ourselves; we use strongly-typed objects instead. Notice also how we filter the data by the X++ enum.

The foreach loop fetches data from the database and fills it into instances of the DataForAX class. Later we simply access fields by name – nothing like resultSet.getString(3) anymore.

This code:

ExtTable axTable = new ExtTable()
{
    ID = record.ID,
    Name = record.Name
};
 
if (axTable.ValidateWrite())
{
    axTable.Insert();
}

sets data to an AX table buffer (ExtTable is the table we created in AX at the very beginning) and calls its validateWrite() and insert() methods as usual. I typically pass records back into AX and process them there, but inserting them to AX database from here works equally well.

Then the code changes values of Status and UpdateTime fields. Finally db.SubmitChanges() writes changed data to the external database. We could also insert or delete records in the external database if needed.

That completes our library; the remaining step is calling it from AX. Open project properties and choose where to deploy the library. We’ll run it from a job for demonstration, therefore we have to deploy it to the client.

DeployToClient

Rebuild the project and restart your AX client, if it was running.

Create a job in AX and call the Import() method from there:

try
{
    new ExtSystem.Data().Import();
}
catch (Exception::CLRError)
{
    throw error(AifUtil::getClrErrorMessage());
}

That’s all we need to do here, because reading the data, inserting it into the AX database and updating the external database are already handled by the .NET library.

The whole application has just a few lines of code, doesn’t contain any strings with SQL commands, it doesn’t depend on the order of columns, the compiler checks whether we use data types correctly and so on.

It may require learning a few new things, but I strongly believe that AX developers should look more at how things are done these days instead of sticking to old ways and low-level APIs such as the OdbcConnection class. Many great frameworks and language features wait for you to use them, to increase your productivity and develop things hardly achievable in X++ – and AX 2012 made it easier than ever. It would be a pity not to benefit from all the progress made in recent years.

I couldn’t really discuss LINQ or the database designer while keeping this post to a sensible length. Fortunately you’ll find plenty of resources about these things. LINQ to SQL also isn’t the only way – maybe ADO.NET would be more suitable for your particular project. Nevertheless, I hope I managed to show you how to use these frameworks from AX – and that it’s not difficult at all.

Connection string for LINQ to SQL


In my article about connecting to external databases with the help of LINQ to SQL, I already mentioned how to parametrize which database to use, because you’ll likely use different databases for development, test and live environments.

I wrote: In real implementations, I usually store DB server and DB name in AX database, create a connection string from them (with the help of a connection string builder) and pass the connection string to data context’s constructor.

Let’s look at what it means in practice.

First of all, let me remind you the .NET class:

public class Data
{
    public void Import()
    {
        TestDBDataContext db = new TestDBDataContext();
        …
    }
}

Here I create the data context without any parameters, therefore it connects to the database that I used when designing the library in Visual Studio. It’s also possible to change it in the designer.

A common way for specifying connection details is using connection strings like this:

Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=True

This approach is supported even by LINQ to SQL and you could simply pass a connection string to the constructor of a data context class:

new TestDBDataContext("Data Source=.;Initial Catalog=TestDB;Integrated Security=True");

I could construct such a string myself, but I’m not interested in the details of the syntax, in concatenating strings and so on. A connection string builder will deal with it for me. Therefore I’ll add a field to the Data class holding the connection string builder instance:

SqlConnectionStringBuilder connStringBuilder;

I also have to add

using System.Data.SqlClient;

at the top of the file, because that’s the namespace containing the SqlConnectionStringBuilder class.

I have to decide what I want to include in my connection string; usually a database server name and a database name are enough, therefore these two values will be my parameters. I add a constructor to the Data class that accepts these parameters, creates a connection string builder and sets its properties:

public Data(string dbServer, string dbName)
{
    //TODO: parameter validation
 
    connStringBuilder = new SqlConnectionStringBuilder();
    connStringBuilder.DataSource = dbServer;
    connStringBuilder.InitialCatalog = dbName;
    connStringBuilder.IntegratedSecurity = true;
}

Now we can get the connection string at any time from the ConnectionString property of the builder, so let’s pass it to the data context:

new TestDBDataContext(connStringBuilder.ConnectionString);

Putting it all together, the class now looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.SqlClient;
 
namespace ExtSystem
{
    public class Data
    {
        SqlConnectionStringBuilder connStringBuilder;
 
        public Data(string dbServer, string dbName)
        {
            //TODO: parameter validation
 
            connStringBuilder = new SqlConnectionStringBuilder();
            connStringBuilder.DataSource = dbServer;
            connStringBuilder.InitialCatalog = dbName;
            connStringBuilder.IntegratedSecurity = true;
        }
 
        public void Import()
        {
            TestDBDataContext db = new TestDBDataContext(connStringBuilder.ConnectionString);
            …
        }
    }
}

Rebuild the solution and open Dynamics AX (restart the client if it was running).

Our test job in AX contained the following code:

new ExtSystem.Data().Import();

It doesn’t compile anymore, because we have to specify the arguments (database server + database name). The following implementation connects to exactly the same database as before.

new ExtSystem.Data('.', 'TestDB').Import();

I can easily prove that the parameters are used by specifying an invalid database name such as WrongDB. I’ll get an error saying “Cannot open database ‘WrongDB’ requested by the login”.

Now the only remaining thing is to create fields in AX database for the parameters and use them when creating the Data class.
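A sketch of what that AX side might look like, assuming a hypothetical parameters table (here called ExtSystemParameters, with DbServer and DbName fields and a find() method) holding the stored values:

// ExtSystemParameters is a hypothetical parameters table in AX
ExtSystemParameters parameters = ExtSystemParameters::find();

new ExtSystem.Data(parameters.DbServer, parameters.DbName).Import();

With this in place, switching between development, test and live databases is just a matter of changing the stored parameters.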

Also note that if I wanted, I could read the parameters from the AX database directly in the .NET class (by creating a proxy to the table etc.). But I think that would be an unnecessary coupling between the library and the table. Using parameters is more flexible and it also makes testing easier.


The value 1 is not found in the map


We were getting the error, “The value 1 is not found in the map”, in one of our rather complex processes in AX 2012. It occurred only sometimes, which made debugging difficult, but finally I managed to track it down. The cause lies in something that many people use, therefore it’s possible that you will, or already did, run into the same problem.

I found the issue in AX 2012 R2 CU7 and reproduced it in AX 2012 R3 as well.

The problem can be demonstrated by running the following piece of code. It creates a progress bar, waits for some time and calls incCount() to update the progress. The last call to incCount() throws the error.

SysOperationProgress progress;
#AviFiles
 
progress = SysOperationProgress::construct();
progress.setAnimation(#AviUpdate);
progress.setCaption("Some caption");
 
sleep((progress.updateInterval() + 1) * 1000);
 
// The progress form gets initialized here.
progress.incCount();
 
// Wait for more than 10 seconds
sleep(11000);
 
progress.incCount();

You see that there is nothing suspicious going on – I just simulate a process that runs for some time and occasionally updates the progress. I’ll explain the importance of the waiting a little bit later.

If you debug the code, you’ll find that the error is thrown at line 33 of SysOperationProgressBase.updateTime(). It’s this line:

r3 = totalValue.lookup(1)-progress;

It should be clear what happens in this code – it tries to get the value for the key 1 from totalValue map but it doesn’t exist (→ The value 1 is not found in the map).

Why doesn’t the map contain the key? It turns out that the initialization is done in setTotal(), which isn’t called in the code above. Therefore if you always call setTotal() when creating SysOperationProgress, you’ll never run into this problem. Nevertheless you have to call it even if you don’t actually know the total – in that case, set it to zero:

progress.setTotal(0);

It’s also worth noting that setTotal() gets called if you create the instance by SysOperationProgress::newGeneral(), which developers often do anyway.

Nevertheless, why does the error occur only sometimes? The SysOperationProgressBase class usually checks whether the key exists in the map before trying to use it. The exception is – obviously – in updateTime(), and several conditions must be met before this piece of code gets called. The trickiest one is this:

if (time-laptime > #tenSeconds)

At least ten seconds must have passed since the last update; if it took less, this code doesn’t get executed and no error is thrown. Therefore we got this error as a side effect of something slowing the system down. Now it’s also clear why my script waits for more than ten seconds before calling incCount() again.

Always calling setTotal() is a relatively easy workaround, but it would be better if SysOperationProgress handled the problem in a better way. It should either accept that the value 1 doesn’t have to exist in totalValue (and check if it’s there before using it) or it should throw an error to inform the developer that he’s trying to use an object that hasn’t been properly initialized.

This blog post doesn’t suggest that The value 1 is not found in the map is always caused by a progress bar – it may refer to any other map. Nevertheless it’s one of the things that you may want to check.
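The mechanics can be sketched in a few lines of Python (purely illustrative, not AX code): the map only gets its key inside set_total(), so update_time() fails when set_total() was never called.

```python
# Illustrative sketch of the failure mode described above.
class Progress:
    def __init__(self):
        self.total_value = {}   # key 1 is only added by set_total()
        self.progress = 0

    def set_total(self, total):
        self.total_value[1] = total

    def update_time(self):
        # mirrors SysOperationProgressBase.updateTime(): no existence check
        return self.total_value[1] - self.progress

p = Progress()
try:
    p.update_time()
    failed = False
except KeyError:                # "The value 1 is not found in the map"
    failed = True

p.set_total(0)                  # the workaround described above
remaining = p.update_time()
```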

Index for delete actions

When using delete actions in Dynamics AX, don’t forget that AX will have to look into the database to know whether there is any related record. And if there is no index, the performance can be really bad.

I’ve just run into such a case, when deleting a customer executed a query running for six seconds. SQL tracing in AX showed me the following query and call stack:

SELECT T1.RECID FROM SOME_CUSTOM_TABLE T1
WHERE (((PARTITION=?) AND (DATAAREAID=?)) AND (CUSTACCOUNT=?))
(C)\Classes\xRecord\doValidateDelete
(C)\Classes\xRecord\validateDelete
(C)\Data Dictionary\Tables\CustTable\Methods\validateDelete - line 8
(C)\Classes\FormDataSource\validateDelete
(C)\Classes\FormDataSource\delete
(C)\Forms\CustTable\Data Sources\CustTable\Methods\delete - line 8

Although CustTable has overridden validateDelete(), the code at line 8 is just a call of super(). Therefore it’s all handled by AX kernel and the obvious explanation is a delete action. As expected, there was a delete action from CustTable, a relation on AccountNum and no corresponding index on the custom table.

Every time you create a delete action, think about how AX will look for the record and whether an index isn’t needed.
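As a rough illustration of the difference (using Python and sqlite3 rather than AX and SQL Server, with a made-up table), the same kind of existence check turns from a full table scan into an index seek once an index on the filtered column exists:

```python
# Illustrative only - sqlite3 stand-in for the scenario above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_custom_table (recid INTEGER, custaccount TEXT)")

query = "SELECT recid FROM some_custom_table WHERE custaccount = 'C0001'"
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

# the fix: an index on the column the delete action filters on
conn.execute("CREATE INDEX idx_custaccount ON some_custom_table (custaccount)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

print(plan_before)   # a full table scan
print(plan_after)    # a search using the index
```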

Join first line in AX 2012

You sometimes need to join only one related record for each parent record, for example one order line for each order header. I’m always suspicious of such requirements, because it often means that users/analysts didn’t realize that there may be more than one record, or they didn’t think enough of identifying the one record they actually need. Nevertheless there are valid scenarios when showing only the first or the last record is exactly what’s required.

If you wrote it in T-SQL, you could use a subquery, but how to achieve the same in AX? If you read my blog post about Subqueries in views, you already know that computed columns can be used for building subqueries. Let’s apply the same approach to this problem.

In my example, I’ll create a form showing information from sales lines and from the last shipment for each sales line. It’s based on a real implementation, nevertheless it’s simplified so we don’t have to deal with an overly complex computed column or form. (Also please don’t forget that computed columns are not supported in AX 2009 and older versions.)

View

First of all, create a view and call it SalesLineLastShipmentView. Add SalesLine datasource and drag InventTransId field from the datasource to the Fields node.

ViewWithInventTransId

InventTransId is the primary key of the SalesLine table and we’ll later use it for a join with SalesLine. The view will also contain ShipmentId (the primary key of WMSShipment table), which will be used for a join as well. To get the ID of the last shipment, first create a method on the view and put the following code there:

public static server str lastShipmentId()
{
    SysDictTable orderTransDt = new SysDictTable(tableNum(WMSOrderTrans));
 
    DictView dv = new DictView(tableNum(SalesLineLastShipmentView));
 
    str s = strFmt('SELECT TOP 1 %1 FROM %2 WHERE %2.%3 = %4 ORDER BY %5 DESC',
        orderTransDt.fieldName(fieldNum(WMSOrderTrans, ShipmentId), DbBackend::Sql),
        orderTransDt.name(DbBackend::Sql),
        orderTransDt.fieldName(fieldNum(WMSOrderTrans, InventTransId), DbBackend::Sql),
        dv.computedColumnString(tableStr(SalesLine), fieldStr(SalesLine, InventTransId), FieldNameGenerationMode::WhereClause),
        orderTransDt.fieldName(fieldNum(WMSOrderTrans, DlvDate), DbBackend::Sql));
 
    return strFmt('ISNULL((%1), \'\')', s);
}

If it’s not clear what it does, wait a moment until we look at the resulting SQL query; it will be much more readable.

Now right-click the Fields node of the view and choose New > String Computed Column. Open properties of the column and set them as follows:

  • Name = ShipmentId
  • ExtendedDataType = WMSShipmentId
  • ViewMethod = lastShipmentId

Properties

Save the view and open it in the table browser. If you have some shipments in your system, you should see them mapped to InventTransIds.

ViewInTableBrowser

If you’re curious, open SQL Server Management Studio, find views in your AX database, right-click the SalesLineLastShipmentView view and choose Design. It opens a view designer and you can find the actual SQL query there. This is extremely useful, especially if your view doesn’t work as expected and you want to see how exactly it was translated to SQL.

This is the definition of our view:

SELECT INVENTTRANSID, DATAAREAID, PARTITION, RECID, CAST(ISNULL
	((SELECT TOP (1) SHIPMENTID
		 FROM dbo.WMSORDERTRANS
		 WHERE (INVENTTRANSID = T1.INVENTTRANSID)
		 ORDER BY DLVDATE DESC), '') AS NVARCHAR(10)) AS SHIPMENTID
FROM dbo.SALESLINE AS T1

You should recognize the subquery defined in our view method. It finds only the first (TOP (1)) WMSOrderTrans record for each SalesLine and returns the ShipmentId. If none is found, it returns an empty string instead of NULL (because AX doesn’t expect NULL values in this case). That’s handled by the ISNULL() function.

Note that you should always use ORDER BY if you look for first or last records, because otherwise the order is not defined and a different row may be returned every time you execute the query.
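The same pattern can be tried out quickly outside AX, e.g. with Python and sqlite3 (LIMIT 1 playing the role of TOP 1; the data is made up):

```python
# Illustrative only - stand-in tables and values, not real AX data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE wmsordertrans (inventtransid TEXT, shipmentid TEXT, dlvdate TEXT)")
conn.executemany("INSERT INTO wmsordertrans VALUES (?, ?, ?)", [
    ("LOT-1", "SH-001", "2015-01-10"),
    ("LOT-1", "SH-002", "2015-03-05"),
])

# with ORDER BY ... DESC we deterministically get the last shipment
last = conn.execute(
    "SELECT shipmentid FROM wmsordertrans WHERE inventtransid = 'LOT-1' "
    "ORDER BY dlvdate DESC LIMIT 1").fetchone()[0]

# the ISNULL(..., '') fallback from the view: no row gives an empty string
row = conn.execute(
    "SELECT shipmentid FROM wmsordertrans WHERE inventtransid = 'LOT-2' "
    "ORDER BY dlvdate DESC LIMIT 1").fetchone()
fallback = row[0] if row else ""
```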

Form

Now we have a mapping between InventTransId and ShipmentId and because we want to see more fields in a form, we have to join tables to our view and build a form showing the fields.

Create a new form and add three datasources:

  • SalesLine (the root datasource)
  • SalesLineLastShipmentView (inner-joined to SalesLine)
  • WMSShipment (joined SalesLineLastShipmentView with outer join, because some sales lines have no shipment at all)

Because we don’t have any relation between SalesLine and SalesLineLastShipmentView, we have to add it in code. Override init() method of SalesLineLastShipmentView datasource and add the following code below super():

this.queryBuildDataSource().addLink(fieldNum(SalesLine, InventTransId),
                                    fieldNum(SalesLineLastShipmentView, InventTransId));

We don’t need the same procedure for WMSShipment, because it’s already handled by the table relation on WMSShipmentID data type.

The last thing is to create a form design. To keep it simple, create a grid and drag some fields there from SalesLine and WMSShipment.

FormWithData

Summary

Views are powerful even without computed columns. For example, if you simply wanted the highest shipment ID for each line, you could build a view like this (and join it with SalesLine and WMSShipment tables):

SELECT InventTransId, MAX(ShipmentId) AS ShipmentId FROM WMSOrderTrans
    GROUP BY InventTransId

Nevertheless computed columns add another dimension to views and allow you to use many more features of T-SQL. This ultimately allows you to implement business requirements that would be difficult to achieve by other means. Unfortunately the syntax for defining computed columns is far from perfect – it’s not easy to write or read and you get little compile-time control. There definitely is room for improvement, but that’s no argument for giving up all the power that computed columns offer.

Download

Zipped .xpo with the view and the form (built in AX 2012 R3)

Local TF Build + VSO + Dynamics AX

In a recent post, I explained that you may want to use a local Team Foundation Build server with Visual Studio Online. Here I wanted to show how to install and configure it, but then I realized that it’s already covered on a few other places. Therefore I won’t repeat the same thing again and simply give you a link: Configuring on-premises Build server for Visual Studio Online by Anthony Borton.

Now we can use the spared time to talk about what happens after setting up your build server.

First of all, you have to tell the build server what to do. This is done by build templates, which define what actions should be performed and in which order; you can also add conditions, input parameters and so on. You can look at Customize your build process template to understand how to create and use a custom build template.

The problem is that Team Foundation Server knows nothing of Dynamics AX, therefore you have to tell it how to create models, compile CIL and anything else AX-specific you want to do during your build. Fortunately you don’t have to develop all these things by yourself – you can download TF Build activities for Dynamics AX from CodePlex: Dynamics AX Admin Utilities. To learn how to use it, read Easy Automated Builds series on Dynamics AX Musings blog.

The CodePlex project also contains a build template you can use as a starting point for your own process definition. Nevertheless don’t expect that you’ll find a template that will exactly match your process. Modeling what your situation requires is your own responsibility.

It’s important to realize that your local build server doesn’t merely download source code – it’s fully integrated with VSO, therefore you can:

  • Start builds from any client application connected to VSO, including the web portal
  • Review status of running and finished builds
  • Download build output (such as AX model files) from the portal
  • Get build notifications
  • And so on

Nevertheless it doesn’t mean that all clients have exactly the same capabilities. For example, you won’t see all build parameters in the web UI, therefore you may want to use Visual Studio instead in some cases.

If you’ve never seen build results in VSO, here is an example.

BuildResultsOnVSO

Notice the links for queuing new builds, downloading logs and artifacts (= output), a link to the exact version of the source code, issues (compilation errors, warnings, TODOs), associated work items and so on. It gives you a huge number of options.

This is nothing new for people using Team Foundation Build with on-premise TFS. The point is that you can use exactly the same even with Visual Studio Online.

List AOS services with colors

Here I have a little Powershell script, written mainly for demonstration purposes, nevertheless it does its job and may be useful to somebody.

It connects to any number of servers, finds Dynamics AX AOS services there and shows them in green or red, depending on whether they’re running or not.

$listOfComputers = 'Server1','Server2'
 
$listOfComputers `
    | % {gsv aos* -comp $_ `
        | % {Write-Host $_.DisplayName -f @('Green', 'Red')[$_.Status -ne 'Running']}}

Output:

AOS service colors
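By the way, the color selection in the script relies on a small trick: indexing a two-element array with a boolean ($false evaluates to 0, $true to 1). The same idea in Python, with made-up service data:

```python
# Illustrative stand-in data, not real AOS services.
services = [("AOS60$01-Prod", "Running"), ("AOS60$02-Test", "Stopped")]

# False -> index 0 -> "Green"; True -> index 1 -> "Red"
colors = [("Green", "Red")[status != "Running"] for name, status in services]
print(colors)
```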

Hail Powershell!

Why not to touch AX DB directly

I’ve been asked to reiterate some arguments against direct access to AX business data. (As you all know, the recommended approach is always going through the AOS, such as calling web services.)

Here are some of the arguments:

  • You would bypass all business logic. It often means that you would have to re-implement some AX logic (and risking getting out of sync with later changes in AX), or you could miss some important logic completely.
  • You would have to deal with many AX concepts normally handled by AX kernel – partitions, virtual companies, table inheritance, date-effective data and so on.
  • Lots of AX metadata is in AX application, most importantly table relations.
  • You would bypass all AX security.
  • If you wanted to write data into AX database, you would have to deal with record IDs and potentially other system fields.
  • You would bypass data caching and AOS servers could ignore your changes, because nothing told them to invalidate their cache.
  • The exact schema in database is considered an implementation detail (because nobody should touch it) and can change without warning (for example, remember the change in implementation of table inheritance between AX 2012 R1 and R2).
  • It’s not a supported use of Dynamics AX and therefore Microsoft wouldn’t help you if you got into troubles with your database.
  • And so on…

My advice is: if you decide to read data directly from the database (likely for performance reasons), you can see there are many things to think about. Be careful, especially regarding security.

If you want to write into the AX database, just don’t do it. Use AIF, or put the data into another database and let AX read it from there. Changing the AX database directly is simply too risky.

‘AX 7’ Preview Tech Conference – First info

Finally, the first pieces of information about the Dynamics ‘AX 7’ Preview Technical Conference are getting out. It’s going to happen on 26–28 October 2015 in Seattle and you can pre-register here.

I already have my flight tickets – see you all there!


Unit testing in AX – What is it?

I believe that unit testing is extremely important and that the lack of its adoption in Dynamics AX is a serious problem. One reason why people don’t write unit tests is that they don’t really know how. Let’s see if I can help a little bit. My intention is neither to write a theoretical work nor a reference guide – I’ll try to explain the fundamentals and show a few examples without going into unnecessary details. The challenge is that although unit testing is trivial in principle, it requires skills and experience to do it right. Please keep that in mind and don’t jump immediately to anything overwhelmingly complex.

First of all, why should we bother with unit tests?

Imagine that you’re developing a complex piece of logic and after every change, you have 50 different cases to test. You either waste a lot of time with tedious testing, or you have a developer mindset and you immediately want to automate it (or you give up testing, but you aren’t that nasty, are you?). If you automate it, you make your development more efficient and you may be finished sooner than if you tested it manually. As a bonus, these tests can be used at any time, perhaps two years later when somebody needs to add a new feature without breaking the original logic. They also work as executable documentation – other developers can see how the code should be used and what test cases must be taken into account.

Now, what exactly do I mean by a unit test? It’s basically a piece of code testing that a unit (typically a method or a class) works as designed. Look at this example:

// Prepare object to test
Number n = new Number(5);
 
// Perform some action
n.Add(1);
 
// Test the result
assertEquals(6, n.value());

As you can see, it’s all about code. This kind of testing is done during development by developers; it’s not a distinct phase performed by a QA department or somebody. These tests are typically written together with the functionality being tested, which also has a positive impact on the design of the code.
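If X++ isn’t at hand, the same toy test can be sketched in Python’s unittest (Number is a stand-in class, not anything from AX):

```python
import unittest

class Number:
    """Toy class under test (illustrative only)."""
    def __init__(self, value):
        self._value = value

    def add(self, n):
        self._value += n

    def value(self):
        return self._value

class NumberTest(unittest.TestCase):
    def test_add(self):
        n = Number(5)                    # prepare object to test
        n.add(1)                         # perform some action
        self.assertEqual(6, n.value())   # test the result

# run the test case programmatically
suite = unittest.TestLoader().loadTestsFromTestCase(NumberTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```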

As a side note, please realize that every project requires many different types of tests, because no single type can cover everything. For example, you’ll never test all possible paths through code and boundary values if you test only through the UI. That’s where unit tests excel. On the other hand, the fact that all unit tests pass doesn’t mean that users will find the product useful. That requires different tests.

The fact that we’re testing a single unit is important – with these tests, you want to test a method or a class rather than something like invoicing. Why?

One reason is complexity. If you’re testing a large component consisting of many classes, it has many different states and paths through code, and the complexity grows exponentially, because each class multiplies its number of states with those of the other classes. The complexity and the number of needed tests quickly get out of hand. The solution is testing smaller parts individually, before their complexity grows uncontrollably.

Sure, there may be problems in the integration of classes and you’ll need some tests for it. But these integration tests won’t bother to test all the details; instead, they will focus on how the classes communicate. What they do alone has already been tested.

Another reason for testing small units is the ability to quickly locate the problem. Ideally, the failed unit test will tell you which particular method doesn’t work and which assertion failed. If it told you only that something was broken in invoicing, it wouldn’t be very helpful. If you’re often forced to use the debugger to find out why a unit test failed, your tests don’t fulfil their role well and should be improved.

One more reason is maintainability. If you test a small unit, you need a small number of tests and if the interface changes (e.g. a method is renamed), fixing tests isn’t too much work. If you test a huge component, you need a huge number of tests and maintaining them may be very expensive. It’s important to realize that it wouldn’t be a fault of unit testing – the cause would be poorly designed tests.

In general, unit tests tend to be small and isolated. You don’t want one test method to test many different things (because it wouldn’t be clear what failed), nor do you want to test the same code with too many tests (because you would have to fix them all if the code changes). You also don’t want tests to influence each other, because that could prevent you from locating the cause of a failure, or you could get false positives.

It’s different from how human testers design tests – they often chain tests together rather than trying to isolate them, because starting each test from scratch would present too much overhead for them. You mustn’t forget that various types of tests require various approaches to design.

I hope this gave you some idea about how unit tests are designed and used, despite that it’s necessarily oversimplified. I’m going to write at least one, more practical article with examples of SysTest framework in AX. If you have some questions, please let me know – I’ll try to incorporate answers (if I have some) in the subsequent post(s).

Unit testing in AX – Basics

In the previous post, I talked about what unit testing is and how such tests should be designed. Let’s jump straight into X++ code this time.

Unit tests are methods and we need a class to hold them. Such a container of test methods is called a test case and it’s simply a class extending SysTestCase.

class LedgerJournalTransTest extends SysTestCase
{ }

It can be a bit more complicated, but it doesn’t have to. This is a completely valid test class.

Test methods have stricter rules – they must:

  • be public
  • have no parameters
  • return void
  • be decorated with SysTestMethodAttribute.

The class may contain any number of helper methods that don’t follow these rules; they apply to test methods only.

Test methods are typically split into three distinct steps: arrange, act and assert (the AAA pattern). I found it very helpful for keeping tests readable, therefore I’ll follow it here as well.

My first test deals with one scenario in the amountCur2DebCred() method of the LedgerJournalTrans table:

[SysTestMethodAttribute]
public void positiveNoCorrection()
{
    // ARRANGE
    LedgerJournalTrans trans;
 
    // Set to non-zero values, because one side should be zeroed
    // and we want to test if it happens.
    trans.AmountCurDebit = 1.234;
    trans.AmountCurCredit = 1.234;
 
    // ACT
    trans.amountCur2DebCred(99);
 
    // ASSERT
    this.assertEquals(99, trans.AmountCurDebit);
    this.assertEquals(0, trans.AmountCurCredit);
}

You can easily see the three phases. I prepare everything needed for the test, run some logic (either returning a value or, as in this case, changing the state of an object) and then verify that the result is what’s expected.

Now run the test. Right-click the class and choose Add-Ins > Run tests. You should see a unit test toolbar showing that one test ran and succeeded. The toolbar can be also opened from Tools > Unit test > Show toolbar.

TestToobar

Now add one more method:

[SysTestMethodAttribute]
public void negativeNoCorrection()
{
    // ARRANGE
    LedgerJournalTrans trans;
 
    // Set to non-zero values, because one side should be zeroed
    // and we want to test if it happens.
    trans.AmountCurDebit = 1.234;
    trans.AmountCurCredit = 1.234;
 
    // ACT
    trans.amountCur2DebCred(-99);
 
    // ASSERT
    this.assertEquals(0, trans.AmountCurDebit);
    this.assertEquals(99, trans.AmountCurCredit);
}

It works nicely; nevertheless, we duplicated the code preparing the record for the test. We can refactor it by moving the initialization to a separate method. One solution is a custom method called at the beginning of each test, but the framework already offers a method for exactly this purpose: setUp(). Let’s use it.

class LedgerJournalTransTest extends SysTestCase
{
    LedgerJournalTrans trans;
}
 
public void setUp()
{
    super();
 
    // Set to non-zero values, because one side should be zeroed
    // and we want to test if it happens.
    trans.AmountCurDebit = 1.234;
    trans.AmountCurCredit = 1.234;
}
 
[SysTestMethodAttribute]
public void negativeNoCorrection()
{
    // ACT
    trans.amountCur2DebCred(-99);
 
    // ASSERT
    this.assertEquals(0, trans.AmountCurDebit);
    this.assertEquals(99, trans.AmountCurCredit);
}
 
[SysTestMethodAttribute]
public void positiveNoCorrection()
{
    // ACT
    trans.amountCur2DebCred(99);
 
    // ASSERT
    this.assertEquals(99, trans.AmountCurDebit);
    this.assertEquals(0, trans.AmountCurCredit);
}

It works exactly as before and it still has all three steps; we just moved the Arrange part somewhere else. It helps keep test methods shorter and more readable and allows maintaining the initialization code in a single place. On the other hand, it might make it less obvious how the system is initialized and why.

All the tests above use the assertEquals() method, which is a very common assertion, but not the only one available. If you look at the other methods named assert*, you’ll find several others such as assertTrue() and assertNotNull(). Also notice that each of them has an additional parameter for a custom message on failure.

Let me show one scenario that works differently than the other assertions and may be a little bit counterintuitive. I’ll call getAssetCompany() with a transaction invalid for this particular method, therefore it should throw an exception – and I want to test that it really works as designed.

[SysTestMethodAttribute]
public void getAssetCompanyNotAssetType()
{
    trans.AccountType       = LedgerJournalACType::Cust;
    trans.OffsetAccountType = LedgerJournalACType::Bank;
 
    trans.getAssetCompany();
}

Yes, the exception is thrown correctly, but it causes the test to fail! We have to tell the framework that this is actually the correct behaviour. We can achieve that by adding this.parmExceptionExpected(true).

[SysTestMethodAttribute]
public void getAssetCompanyNotAssetType()
{
    trans.AccountType       = LedgerJournalACType::Cust;
    trans.OffsetAccountType = LedgerJournalACType::Bank;
 
    this.parmExceptionExpected(true);
 
    trans.getAssetCompany();
}

Now the framework knows that the test should throw an exception and if it happens, it’s considered successful. If it didn’t throw the error, we would get “Failure: An exception was expected to be thrown”.
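For comparison, other unit testing frameworks express the same intent more locally; e.g. in Python’s unittest (with get_asset_company as a made-up stand-in for the AX method):

```python
import unittest

def get_asset_company(account_type):
    # made-up stand-in for LedgerJournalTrans.getAssetCompany()
    if account_type != "FixedAssets":
        raise ValueError("The transaction is not a fixed asset transaction")
    return "dat"

class AssetCompanyTest(unittest.TestCase):
    def test_not_asset_type(self):
        # the counterpart of parmExceptionExpected(true): the test only
        # passes if the expected exception is actually thrown
        with self.assertRaises(ValueError):
            get_asset_company("Cust")

suite = unittest.TestLoader().loadTestsFromTestCase(AssetCompanyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```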

As you see, writing unit tests is not necessarily something difficult and time-consuming, as you may have been told. In many cases, it’s the fastest and safest way of development, because you gradually add tests and functionality and every run of tests (which costs you nothing) verifies that all tests cases work as expected. Manual testing would take much longer. And of course, you’ve got regression tests for years ahead.

Of course, these are simple examples and you’re probably thinking of many more complicated situations, especially those where the tested object refers to other objects and database tables. That surely is trickier (and I’ll look at some cases in the next post), but it shouldn’t discourage you. First of all, the fact that you can’t write unit tests for all code doesn’t mean you shouldn’t write them at all. Choose the cases you can handle and leave more complicated scenarios for later, when you have more experience with unit testing. You’ll never cover all code anyway. And as you’ll see, the solution usually isn’t in writing some extremely sophisticated unit tests – it’s in refactoring hard-to-test code into something more friendly to unit testing.

Pub chat about AX and ALM (London)

A friend of mine asked me to have a chat with him about application lifecycle management in Dynamics AX (especially version control, project management and automated builds, based on TFS). I think it would be a bit wasteful to do it for a single person, and other people could bring other points of view. Therefore I would like to invite you to join us, if you’re interested in these topics. Nevertheless please don’t expect any formal lecture – I call it “pub chat” for a reason. (That doesn’t mean that I won’t have my “laptop server” with me and won’t show anything in practice.)

I want to keep control over how many people will come (because this type of event wouldn’t work with a large crowd), therefore I’m not going to publish all details here. Just drop me an email (gosh...@goshoom.net), if you’re interested.

Basic information:

Location: London, United Kingdom
Date: Sunday, 20th September 2015
Time: afternoon, not precisely specified yet

DynamicsAxCommunity module 0.3.8

I wrote the DynamicsAxCommunity Powershell module quite a few years ago to help me with a project where I was building and deploying several AX environments across many servers. You can still see some design decisions from this old project, such as the kind of actions I needed and the emphasis on remoting (e.g. I can simply call Start-AXAOS UAT and the module connects to the right remote machine and starts the right service there).

Since then, the module hasn’t received any major features, but it’s still maintained and extended as needed. Some modifications were triggered by a changed reality (such as the introduction of AxBuild), some by changes in usage. For example, although the module was originally written mainly for automated deployments, I started using it more interactively. I often use Start-AXClient instead of AX configuration files (e.g. Start-AXClient UAT -Layer VAP -Dev), which required a few extra parameters such as -Layer.

There are surely things that could have been designed better (such as some parameter names), but now it’s too dangerous to change them. It could easily break existing scripts.

That being said, the latest build brings (among some fixes) a potentially breaking change, but I believe it can be justified. Without going too deep into details, the module used to ask for credentials before the first call to any remote machine and then tried to cache the credentials and used them for subsequent calls. The fundamental flaw was that it asked for credentials even if the remote machine was configured not to require them. The new implementation never calls Get-Credential and tries to execute the remote command even if you don’t provide any explicit credentials. It’s up to the remote machine to accept or reject the call.

I hope it won’t cause any troubles and it will actually make your life easier.

You can find a brief description of other changes in the change log and you look into source code for details.

Debugging TF Build template for AX

I’m preparing a build of a Dynamics AX 2009 environment using Team Foundation Server 2013 and libraries from Dynamics AX Admin Utilities. Unfortunately the build failed with the following error:

Exception Message: Object reference not set to an instance of an object. (type NullReferenceException)
Exception Stack Trace:    at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

It says that there is a null reference somewhere, but it doesn’t say where, which isn’t very helpful.

I opened the block in my build template which handles exceptions (named If a Compilation Exception Occurred) and added one more action before rethrowing the exception.

Activity: WriteBuildError
Message property: compilationException.ToString()

Just getting a string representation of the exception may not look useful, but it did help. This is the message I got in build output:

System.NullReferenceException: Object reference not set to an instance of an object.
   at CodeCrib.AX2009.Client.Client..ctor()
   at CodeCrib.AX2009.TFS.ImportXPO.BeginExecute(AsyncCodeActivityContext context, AsyncCallback callback, Object state)
   at System.Activities.AsyncCodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

It looks similar, but it’s much more useful. It tells me that the NullReferenceException occurred inside a constructor of Client class in assembly CodeCrib.AX2009.Client. That’s much more specific.

When reviewing the constructor, I noticed that it tries to access a registry key (HKEY_CURRENT_USER\Software\Microsoft\Dynamics\5.0\Setup\Components) to find where the AX client is installed. I know that these registry keys don’t always exist and I verified that this was the case with my build service too. I created the registry keys by running the AX configuration utility, started a new build and voilà – the NullReferenceException is gone.
