Accessing Dynamics 365 WebAPI as an application user without using ADAL

Published / by AK

Microsoft has recently posted the documentation for using Postman with the Dynamics 365 WebAPI at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/webapi/use-postman-web-api. The documented approach prompts you to log in to the application with your own credentials to generate a bearer token, which is then passed to the Dynamics 365 WebAPI with each request.

However, we sometimes need to access the WebAPI as an application user, either for integration testing or for implementing automations. Microsoft has documented server-to-server (S2S) authentication using Azure Active Directory Authentication Libraries (ADAL) at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/walkthrough-multi-tenant-server-server-authentication.

Although I am a developer, I strongly believe we should only write a program for a process that can first be done manually. Therefore, tools like Fiddler, SoapUI and Postman are essential to me when it comes to testing WebAPIs.

Now, I will show you how we can generate an access token for an application user in Postman, assuming you have already created one. If you need detailed steps to create an application user, please follow the instructions at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/walkthrough-multi-tenant-server-server-authentication.

In Postman, let’s first create a few environment variables.

  • TenantId – to store the GUID of the Azure Active Directory (tenant) ID
  • D365Url – to store the URL of the Dynamics 365 instance
  • D365AppId – to store the Azure application ID
  • D365AppSecret – to store the application secret
  • D365BearerToken – to store the generated access token
Environment variables

Then, let’s create a new POST request to https://login.microsoftonline.com/{{TenantId}}/oauth2/token with the following data.

Token request
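
For reference, the body is sent as x-www-form-urlencoded key-value pairs. A minimal sketch using the environment variables defined above (note that resource must be the Dynamics 365 URL without a trailing slash):

    grant_type=client_credentials
    client_id={{D365AppId}}
    client_secret={{D365AppSecret}}
    resource={{D365Url}}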

Now, send the request and it should return an access token (assuming the application user has been set up correctly).

Access Token

Now, we need to store the access token in our Postman environment variable called D365BearerToken so that we can re-use it when accessing the Dynamics 365 WebAPI. To do this, we need to write a simple script under the Tests tab.

Writing tests
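
A minimal sketch of such a test script, assuming the token endpoint responded successfully with a JSON body containing access_token:

    // Parse the token response and store the access token in the
    // D365BearerToken environment variable for later requests.
    var json = pm.response.json();
    pm.environment.set("D365BearerToken", json.access_token);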

Pretty easy, right? Now, let’s create another GET request to retrieve the version number of Dynamics 365 instance. The request’s authorization type should be set to ‘Bearer Token’ and its token value should refer to our Postman environment variable D365BearerToken.

Retrieve version
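
Put together, the request looks something like this (the v9.0 path is an assumption; use the Web API version of your instance):

    GET {{D365Url}}/api/data/v9.0/RetrieveVersion()
    Authorization: Bearer {{D365BearerToken}}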

Now, you can test any WebAPI calls using an application user without relying on ADAL.

What’s next?

With this approach, you are no longer limited to the Dynamics 365 OData queries available in Flow/LogicApps. You can now call any Dynamics 365 message and perform various tasks.

Creating a custom connector to upload a file to Google Drive from PowerApps – Part 2

Published / by AK

Part 1 can be found here.

Swagger time

To create a custom connector, you can upload either a swagger file or a Postman collection. I am not a swagger expert, but apistudio makes my life easier. Let’s jump onto http://specgen.apistudio.io.

Paste the URL that you used in Postman and make sure the method is POST. You should receive a 202 status after sending the request. One more thing to remember: click the ‘Next Step’ button instead of clicking the tabs; otherwise, you will need to redo your work.

Generate swagger

Enter the API program and contact info. Make sure to use a valid Contact Url and Contact Email. What’s next? The Next Step button.

Swagger contact info

Next is API Info. You can play around with the slider, which changes the API Base Path and API Path. Give a meaningful OperationId.

API Info

There is nothing special to do with Headers Info and Params Info, although we will make some manual changes to them later. I don’t see (and don’t know) a way to make those changes in the generator.

Keep clicking Next Step until you get to the Open API Spec, where you can download the swagger file.

Open API Spec

Now, let’s make a few changes to the swagger file.

First, we need to replace null with application/json in the “produces” collection, and remove the whole section for the “content-type” parameter.

Remove content-type

Modify produces
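
After both edits, the relevant fragment of the swagger file looks roughly like this (a sketch; the “content-type” entry is simply deleted from the “parameters” array):

    "produces": [
      "application/json"
    ],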

Set default values for api-version, sp, sv and sig. You can find these values in the URL or by using Postman.

Set default values
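
A sketch of one such query parameter with its default set (the actual values come from your own trigger URL; the sig value below is a placeholder):

    {
      "name": "sig",
      "in": "query",
      "required": true,
      "type": "string",
      "default": "<sig-value-from-your-trigger-url>"
    }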

Lastly, you need to add a parameter for the file content, which will be read from formData as a file type.

Adding file params
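
A sketch of the added parameter, assuming the form-data part is simply named file:

    {
      "name": "file",
      "in": "formData",
      "required": true,
      "type": "file"
    }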

That’s pretty much all you need to do in the swagger file before creating a custom connector in PowerApps.

Creating a custom connector

Launch PowerApps in the browser, go to Custom connectors, and import your swagger file.

Create a new custom connector

Create custom connector

Generally, you don’t need to change anything until you get to 3. Definition. At 2. Security, you can still use ‘No authentication’, as the API will authenticate using the sig parameter you have provided in the query string. As long as you don’t expose the signature parameter, no unauthorised user can use your API.

At step 3, scroll to the Request section where you will see the query parameters. Edit api-version, sp, sv and sig, and change their visibility to internal, since we have set default values in the swagger file.

Modify visibility

After changing the visibility of api-version, sp, sv and sig, you can now create the custom connector.

Once it is created, it is time to head into PowerApps. While adding a new data source to your app, you can now find your custom connector.

Custom connector in PowerApps

After creating the data source, let’s add some controls to the app and check whether it works. I have added a PenInput and a Button as below. On the Button’s OnSelect, I am going to push the image from the PenInput to my Google Drive.

Sample PowerApps
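
For illustration only, the OnSelect formula ends up looking something like the line below; GoogleDrive, UploadFile and the parameter order are hypothetical and will depend on the OperationId and visible parameters in your swagger file:

    GoogleDrive.UploadFile("signature.png", PenInput1.Image)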

Voila! It is now uploaded to Google Drive.

PenInput to Google Drive

PowerApps does not work

You know the drill, right? Save, close and open your PowerApps again.

One last thing

This is only an example of how we can easily (of course, I can now say ‘it is easy’) expose any Flow functionality as a custom connector. This also addresses the concern about the Camera control producing low-resolution photos. With this approach, we can use the AddPicture control not only to take full advantage of the device camera but also to pick up photos/files that already exist on the device.

Generally, we don’t need a custom connector if the parameter types are simple. In this example, we need Base64. I guess that, depending on an endpoint’s parameter types, PowerApps passes suitably converted parameters.

Anyway, we can now invoke the endless possibilities of Flow connectors from PowerApps.

Creating a custom connector to upload a file to Google Drive from PowerApps – Part 1

Published / by AK

Knowing the limitation

Although Microsoft Flow and PowerApps support ready-to-use connectors for different services, the two do not seem to offer the same level of functionality.

In Flow, for instance, there is a connector for Google Drive with a bunch of actions, such as creating/deleting a file and browsing folders.

Google Drive Actions in Flow

However, you cannot use those actions in PowerApps. Apparently, the Google Drive connector in PowerApps only works with Excel as a data source.

Google Drive in PowerApps

Additionally, if you pay attention to the different image controls, such as PenInput, Camera and Add Media, they have different properties that look like they do the same thing. For example, Camera.Photo returns a Data URI of the picture, while PenInput.Image and AddMediaButton.Media effectively return a local blob storage URL (appres://blobmanager/<GUID>/<serial number>) that can only be accessed within the app.

Given the inability to use the full functionality of connectors, and local blob storage URLs that are meaningless outside the app, we need to look at what else we can do.

Knowing the out-of-the-box capability

Given that Camera.Photo returns a Data URI string of an image, we can convert it into binary and save it to Google Drive. However, there is another small issue: the Camera control in PowerApps does not use the full resolution of the device camera. The result is ridiculously small (at least to me). Even if we can live with small photos, we still need a workaround for the other controls.

Out of the box, you can upload images from the above controls to Azure Blob Storage. Then, from Azure Blob Storage, you can copy the file to the destination you want. This is not a problem for a client who already has an Azure subscription, but it is not a neat solution for a client who has none. Trust me, the cost is never a problem (Azure Blob Storage is super cheap); it’s more about managing different instances. If managing different instances is not an issue, this is the way to go.

Going an extra mile

This is where the real fun starts.

But we always want to go the extra mile to see what else is there. On the bright side, PowerApps allows us to create a custom connector, backed by a Flow, to upload a file to Google Drive.

First, let’s start by creating a flow that is triggered when a HTTP request is received. There is only one such trigger available; it is basically a webhook.

HTTP Request Trigger

Then, initialise a ‘FileName’ variable so that we can save the flow. The expression for its value is trigger()['outputs']['queries']['filename'], as we are planning to pass a filename query string with the request.

Initialise FileName variable

After saving the flow, expand the HTTP trigger to grab the URL to invoke.

URL to invoke

Copy that URL into Postman and append the filename query string to the URL. Please note that the sig query string should be kept secret.

Test from Postman
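
The full URL then looks something like this (everything below is a placeholder; your region, workflow ID, sp, sv and sig values will differ):

    POST https://prod-00.westus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=<permissions>&sv=1.0&sig=<secret>&filename=test.png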

After invoking the URL, you will see that the Flow has executed. When you open the executed Flow instance, you will see that the filename query string has been assigned to the FileName variable in the Flow.

Test Result

Now, let’s try sending a file from Postman, and then look at the Flow to extract the file content. In Postman, choose a form-data entry of type File and send the request.

Send the file from Postman

Open the executed flow. This time, you will see ‘click to download’ in the output of the HTTP request.

Test Result with file

When you open it, you will see a massive output, but we are only after a few things: body and $content. $content is the Base64 of the uploaded file.

Multipart body content
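
Roughly, the trigger output body is represented like this (a sketch with headers and content trimmed):

    {
      "$content-type": "multipart/form-data; boundary=...",
      "$multipart": [
        {
          "headers": { ... },
          "body": {
            "$content-type": "image/png",
            "$content": "iVBORw0KGg..."
          }
        }
      ]
    }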

To get this content, add another Initialize variable action. This time we are going to set its type to Object, as Google Drive accepts JSON.

Prepare file content
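
A sketch of the value expression for the FileContent variable, assuming the file is the first (and only) part of the multipart body:

    trigger()['outputs']['body']['$multipart'][0]['body']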

Now, let’s add another action to upload a file to Google Drive. It will ask you to connect to Google if you have no prior connection.

Google Drive connector

Now, let’s choose the folder path in Google Drive, the file name and its content. You will need to write an expression for the file content to get the object from the FileContent variable.

Google Create File action
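
The file content expression is then simply the variable itself:

    variables('FileContent')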

Save the Flow. Go back to Postman, and send the request. You will see the flow is executed and a file is uploaded to your Google Drive.

I will cover the process of converting this Flow into a custom connector and calling it directly from PowerApps, in Part 2.

Multiple user roles and entity permissions in Microsoft CRM Portal

Published / by AK

Before reading this post, you may like to read the official documentation from Microsoft at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/assign-entity-permissions. It will give you a basic understanding of entity permissions in Microsoft CRM Portal.

Your customer wants to build a portal using Microsoft CRM Portal. Connections are used to link their contacts and accounts. The customer wants to control the permissions that contacts (who will log in to the portal) have on their related accounts, based on their connection role. If a contact is assigned the Admin role, they should be able to edit the account record; if the User role, they should have read-only access to the account record.

Using connection to assign role to accounts

Does this sound like a familiar request from your customers?

If yes, let’s look at how we can implement this scenario in Microsoft CRM portal.


Microsoft CRM Portal (online) development essentials

Published / by AK

After developing a custom portal on Microsoft CRM Portals, I would like to share my experience.
This is a kind of starter guide for building highly customised web pages on Microsoft CRM portals. It is high level and very informal.

Libraries and frameworks

To customise the portal, a good understanding of the following is essential.

Bootstrap

Microsoft CRM portal uses Bootstrap. Use a Bootstrap customiser like https://www.bootstrap-live-customizer.com/ for quick customisation.


Default tab (timeline) of Activity Wall in CRM Portal

Published / by AK

Configuring and customising the CRM portal is always fun. You need to pay attention to even the smallest detail, as there is no one-stop place to configure everything. Everything is connected: the CRM form, the portal entity form, metadata, permission inheritance and scripts. One misconfiguration will render the form incorrectly.

One day, we found that some portal forms were rendering the Timeline area, like below, where the Notes area was supposed to be.


Client-side library for CRM Portal – Completion of first draft

Published / by AK

Before the end of 2016, I started working on Microsoft CRM Portal development. I realised I would have to heavily use Liquid and JavaScript if I wanted to customise it. It is true that we can use jQuery to manipulate its elements, but I never want to manipulate the DOM, especially in this type of application, whose UI will change over time. Coming from CRM development, that will be completely understandable.

Duplicated attributes on CRM Portal Form

Published / by AK

Last week, a colleague showed me a strange error in one of the entity forms in CRM portal. The form itself looked fine and there was no metadata defined for any attribute. The rest of the entity forms were working fine, and the error message was not useful. We even re-created and re-configured the form, but the result was still the same.

CRM Portal error


SetProcessRequest and SYSTEM impersonated IOrganizationService

Published / by AK

CRM 2016 (Dynamics 365) introduces SetProcessRequest, which can be used to programmatically activate a business process flow for a given record. However, it does not always work if the IOrganizationService is created as the SYSTEM user. To replicate it:

  1. Create a plug-in that executes on the Post Operation stage of the Create message on an entity (e.g., incident)
  2. Create a SYSTEM-impersonated organization service in the plug-in
    // Passing null to CreateOrganizationService creates a service
    // that runs in the context of the SYSTEM user.
    IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
    IOrganizationService service = serviceFactory.CreateOrganizationService(null);
  3. Use that service to execute the request
    // The request does not always succeed when executed as SYSTEM.
    service.Execute(new SetProcessRequest() { ... });


Hello again!

Published / by AK

I started blogging about Microsoft Dynamics CRM (now Dynamics 365) two years ago. I intended to post new and unusual findings rather than repeat existing topics and content (there are already plenty of blogs covering those). Since I did not want to add to the information overload, I could not find enough material to publish while I was not working with the latest products.

Last year, I failed to renew my hosting, and the database backup I had locally was outdated. As a result, I lost my old posts, and I was too lazy to redo them.

Now, I have new material and have started doing my own side projects in my free time. I think my work will be useful for the Dynamics community, and it is worth trying again to publish my findings. This time, I would like to cover a wider area, not just Dynamics CRM.

Stay tuned!