Dynamic Chart in LWC With Multiple Datasets Using ChartJS

Data visualization is the graphical representation of information and data. By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data. Executives and managers in any firm are interested in visualizing data in a way that gives them insights. One such visualization library that is very popular and open source is ChartJS, a simple yet flexible JavaScript charting library for designers and developers. In this blog, let us take a look at how we can use ChartJS to draw charts in a Lightning web component.

In this blog, I will demonstrate how you can get data from a Salesforce object using Apex and feed it to a bar chart. I will also use an aggregate query to get the data summed up and show it in two datasets. To build the UI, I've used two LWCs: one the parent, the other its child.

Most of the data work is handled by the parent component and passed as an attribute (chartConfiguration) to the child. In this blog, we are building an Opportunity bar chart showing Expected Revenue and Amount for the various stages.

  1. Download the ChartJS file from here and load it as a static resource with the name ‘ChartJS’.
  2. Create a Lightning web component with the name ‘gen_barchart’ and copy the code below.


import { LightningElement, api } from 'lwc';
import chartjs from '@salesforce/resourceUrl/ChartJS';
import { loadScript } from 'lightning/platformResourceLoader';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';

export default class Gen_barchart extends LightningElement {
    @api chartConfig;

    isChartJsInitialized = false;

    renderedCallback() {
        // only load the library and draw the chart once
        if (this.isChartJsInitialized) {
            return;
        }
        // load ChartJS from the static resource
        Promise.all([loadScript(this, chartjs)])
            .then(() => {
                this.isChartJsInitialized = true;
                const ctx = this.template.querySelector('canvas.barChart').getContext('2d');
                this.chart = new window.Chart(ctx, JSON.parse(JSON.stringify(this.chartConfig)));
            })
            .catch(error => {
                this.dispatchEvent(
                    new ShowToastEvent({
                        title: 'Error loading Chart',
                        message: error.message,
                        variant: 'error'
                    })
                );
            });
    }
}
I have used platformResourceLoader to load the script from the static resource in the renderedCallback() lifecycle hook.


<template>
    <div class="slds-p-around_small slds-grid slds-grid--vertical-align-center slds-grid--align-center">
        <canvas class="barChart" lwc:dom="manual"></canvas>
        <div if:false={isChartJsInitialized} class="slds-col--padded slds-size--1-of-1">
            <lightning-spinner alternative-text="Loading" size="medium" variant="base"></lightning-spinner>
        </div>
    </div>
</template>

In the HTML I have added a canvas tag; as per the ChartJS documentation, the library draws the chart onto that canvas.

3. Create an Apex class to pull the data from Salesforce. In this example I’ve used a SOQL query to pull data from Opportunity, aggregating the Amount and ExpectedRevenue fields.


public class GEN_ChartController {
    @AuraEnabled(cacheable=true)
    public static List<AggregateResult> getOpportunities(){
        return [SELECT SUM(ExpectedRevenue) expectRevenue, SUM(Amount) amount, StageName stage 
               FROM Opportunity WHERE StageName NOT IN ('Closed Won') GROUP BY StageName LIMIT 20];
    }
}
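For illustration, the aggregate query returns one row per stage, with the aliases expectRevenue, amount, and stage as the keys. Here is a plain-JavaScript sketch of the same grouping; the helper name and sample records are hypothetical, not from the org:

```javascript
// Sketch of the grouping the SOQL GROUP BY performs; helper name and
// sample records below are hypothetical.
function summarizeByStage(opps) {
    const byStage = {};
    opps.forEach(opp => {
        // one summary row per StageName, keyed by the SOQL aliases
        const row = byStage[opp.StageName] ||
            (byStage[opp.StageName] = { expectRevenue: 0, amount: 0, stage: opp.StageName });
        row.expectRevenue += opp.ExpectedRevenue;
        row.amount += opp.Amount;
    });
    return Object.values(byStage);
}

const rows = summarizeByStage([
    { StageName: 'Prospecting', ExpectedRevenue: 100, Amount: 200 },
    { StageName: 'Prospecting', ExpectedRevenue: 50,  Amount: 100 },
    { StageName: 'Negotiation', ExpectedRevenue: 75,  Amount: 150 }
]);
// rows now holds one summary object per stage
```

Each summary object is shaped exactly like the AggregateResult rows the wire service hands to the parent component below.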

4. Create another LWC as the parent that does the data work for us.


import { LightningElement, wire } from 'lwc';
import getOpportunities from '@salesforce/apex/GEN_ChartController.getOpportunities';

export default class Gen_opportunitychart extends LightningElement {
    chartConfiguration;
    error;

    @wire(getOpportunities)
    getOpportunities({ error, data }) {
        if (error) {
            this.error = error;
            this.chartConfiguration = undefined;
        } else if (data) {
            let chartAmtData = [];
            let chartRevData = [];
            let chartLabel = [];
            data.forEach(opp => {
                // the keys match the aliases in the aggregate SOQL
                chartAmtData.push(opp.amount);
                chartRevData.push(opp.expectRevenue);
                chartLabel.push(opp.stage);
            });
            this.chartConfiguration = {
                type: 'bar',
                data: {
                    labels: chartLabel,
                    datasets: [
                        {
                            label: 'Amount',
                            backgroundColor: 'green',
                            data: chartAmtData
                        },
                        {
                            label: 'Expected Revenue',
                            backgroundColor: 'orange',
                            data: chartRevData
                        }
                    ]
                },
                options: {}
            };
            console.log('data => ', data);
            this.error = undefined;
        }
    }
}

From the above block of code, you can see multiple entries in the datasets array. This is how you keep adding datasets to show on the chart. The chartConfiguration provided here defines the type of chart, its data, labels, and any options.
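To make the multiple-datasets idea concrete, here is a small sketch (the helper name is mine, not from the component above) that builds a Chart.js config from any number of series; a third measure would just be one more element in the series array:

```javascript
// Hypothetical helper: each entry in `series` becomes one Chart.js dataset.
function buildChartConfig(labels, series) {
    return {
        type: 'bar',
        data: {
            labels: labels,
            datasets: series.map(s => ({
                label: s.label,
                backgroundColor: s.color,
                data: s.values
            }))
        },
        options: {}
    };
}

// sample values, shaped like the aggregated Opportunity data
const config = buildChartConfig(
    ['Prospecting', 'Negotiation'],
    [
        { label: 'Amount', color: 'green', values: [300, 150] },
        { label: 'Expected Revenue', color: 'orange', values: [150, 75] }
    ]
);
```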


<template>
    <lightning-card title="Open Opportunities" icon-name="utility:chart">
        <template if:true={chartConfiguration}>
            <c-gen_barchart chart-config={chartConfiguration}></c-gen_barchart>
        </template>
    </lightning-card>
</template>


<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- use your org's current API version -->
    <apiVersion>48.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__AppPage</target>
    </targets>
</LightningComponentBundle>

5. Create a Lightning App page to preview the chart.

The chart will be rendered as shown below:

You can download the files from the github repo here.

#Giveback #StaySafe

Identify Source of a Lightning Web Component (LWC)


Many times we may get a scenario where an LWC is placed in a community for external users and also in the App Builder so that it is used by internal users. Let us see how we can identify where the component is being accessed from, so that we can conditionally render the design/CSS.

We’ll make use of the lifecycle hook connectedCallback() to pull this information from Apex into the LWC’s JavaScript.

Sample Code




import { LightningElement, track } from 'lwc';
import checkPortalType from '@salesforce/apex/IdentifySource.checkPortalType';

export default class Identifysource extends LightningElement {
    @track isincommunity;

    connectedCallback() {
        checkPortalType()
            .then(result => {
                // Theme3 is returned when the component runs in a community
                var isInPortal = result === 'Theme3' ? true : false;
                // setting tracked property value
                this.isincommunity = isInPortal;
            })
            .catch(error => {
                this.error = error;
            });
    }
}

public with sharing class IdentifySource {
    @AuraEnabled
    public static String checkPortalType() {
        return UserInfo.getUiThemeDisplayed();
    }
}
If you are from an Aura background, you can relate connectedCallback() to doInit(). If you want to perform any logic before the element is rendered, add it to the connectedCallback() method. The connectedCallback() lifecycle hook fires when a component is inserted into the DOM, and it flows from parent to child. So we call the Apex from the connectedCallback() method.

Now on the Apex side, we have the UserInfo class that retrieves the UI theme of the logged-in user. This way we can identify which theme the user has logged in with. The list below shows the possible return values of the getUiThemeDisplayed() method.

  • Theme1—Obsolete Salesforce theme
  • Theme2—Salesforce Classic 2005 user interface theme
  • Theme3—Salesforce Classic 2010 user interface theme
  • Theme4d—Modern “Lightning Experience” Salesforce theme
  • Theme4t—Salesforce mobile app theme
  • Theme4u—Lightning Console theme
  • PortalDefault—Salesforce Customer Portal theme
  • Webstore—Salesforce AppExchange theme
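The theme values above can be folded into a simple lookup. The sketch below (the helper and label map are mine, added for illustration) mirrors the Theme3 check the component makes, and you could extend it to flag other themes the same way:

```javascript
// Hypothetical lookup of the getUiThemeDisplayed() values listed above.
const THEME_LABELS = {
    Theme1: 'Obsolete Salesforce theme',
    Theme2: 'Salesforce Classic 2005',
    Theme3: 'Salesforce Classic 2010 (used by communities)',
    Theme4d: 'Lightning Experience',
    Theme4t: 'Salesforce mobile app',
    Theme4u: 'Lightning Console',
    PortalDefault: 'Customer Portal',
    Webstore: 'AppExchange'
};

// Communities report Theme3, which is what the component checks for.
function isCommunityTheme(theme) {
    return theme === 'Theme3';
}
```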

Open Lightning Component as Tab from Quick Action

Most of us might have opened a Lightning component from a quick action button by embedding the component in the quick action. It’s a nice feature that helps us pop up UI elements from a record page. However, the component appears in a modal. In this blog, let’s see how we can show the component in a new tab instead.

Text Book Lessons

We would be using a quick action on the Case object. The reason I’ve chosen Case is that for a few objects (Case, User Profile, and Work Order), quick actions appear in the Chatter tab when feed tracking is enabled. So the first task is to disable feed tracking on the Case object.

The next piece of theory to understand is the lightning:isUrlAddressable interface. It enables you to generate a user-friendly URL for a Lightning component with the pattern /cmp/componentName, instead of the base-64 encoded URL you get with the deprecated force:navigateToComponent event. If you’re currently using the force:navigateToComponent event, you can provide backward compatibility for bookmarked links by redirecting requests to a component that uses lightning:isUrlAddressable.
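As an illustration (the helper is mine, not part of the platform API), the URL that isUrlAddressable resolves follows the /cmp/componentName pattern, with the state keys appended as query parameters:

```javascript
// Hypothetical sketch of the /cmp/<componentName> URL shape; custom
// state keys must carry a namespace prefix such as c__.
function buildCmpUrl(componentName, state) {
    const query = Object.entries(state || {})
        .map(([key, value]) => `${key}=${encodeURIComponent(value)}`)
        .join('&');
    return `/cmp/${componentName}${query ? '?' + query : ''}`;
}
```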

Finally, we need to understand lightning:navigation. This component helps to navigate to a given pageReference or to generate a URL from a pageReference.

The solution

Let’s look at how this works. We create a Lightning component (LC) that uses lightning:navigation to create a URL from the pageReference variable. The pageReference variable defined in the controller holds the name of the LC that needs to be opened in a new tab, plus any parameters we need to append to the URL (usually the record Id). We need to use the pageReference type ‘standard__component’ (Lightning Component).


<!-- Component used on the Quick Action -->
<aura:component implements="force:lightningQuickAction, force:hasRecordId" >
    <lightning:navigation aura:id="navService"/>
    <aura:attribute name="pageReference" type="Object"/>
    <aura:handler name="init" action="{!c.navigateToLC}" value="{!this}" />
    Record Id: {!v.recordId}
</aura:component>


({
    navigateToLC : function(component, event, helper) {
        var pageReference = {
            type: 'standard__component',
            attributes: {
                componentName: 'c__TabComponent'
            },
            state: {
                c__refRecordId: component.get("v.recordId")
            }
        };
        component.set("v.pageReference", pageReference);
        const navService = component.find('navService');
        const pageRef = component.get('v.pageReference');
        const handleUrl = (url) => {
            // open the generated URL in a new browser tab
            window.open(url);
        };
        const handleError = (error) => {
            console.log(error);
        };
        navService.generateUrl(pageRef).then(handleUrl, handleError);
    }
})


<!-- Component that is opened in a new tab. -->
<aura:component implements="lightning:isUrlAddressable">
    <aura:attribute name="refRecordId" type="String" />
    <aura:handler name="init" value="{!this}" action="{!c.init}" />
    <div class="slds-box slds-theme_default">
        <p>This component has been opened from a QuickAction button from a record with Id : {!v.refRecordId} as a tab.</p>
    </div>
</aura:component>


({
    init : function(component, event, helper) {
        var pageReference = component.get("v.pageReference");
        component.set("v.refRecordId", pageReference.state.c__refRecordId);
    }
})

All set. Let’s click on the quick action. You can see the Lightning component opened in a new tab; the URL carries the Case record Id as a parameter, and it is displayed on the component.

SFDC ANT Deployments using Azure Pipelines


Azure DevOps is a very powerful application that offers git-based repos and pipelines to automate tasks relevant to any IT process. In this blog, we are diving into the use of Azure Pipelines for Salesforce developers to carry out deployment activities. You can use this blog to understand how to push the metadata in your repo into a Salesforce org. This blog uses an Azure repo to store the metadata and ANT-based deployments. You could also link a GitHub/Bitbucket repo into an Azure pipeline and run the deployments from there. Instead of ANT-based deployments, sfdx can also be leveraged, but that might need different pipeline tasks to support it. So let us get started and retrieve and deploy the metadata using ADO pipelines.


We’ll start with the assumption that you have good experience with the Salesforce ANT Migration Tool, because the Azure pipeline we’ll build uses this migration tool at the backend. You can start by cloning this repo from my GitHub here. Sign up for an Azure developer edition to try this out from your personal Azure dev org. If your project (at work) allows, you could also create a new repo, or a branch within a repo, with the files from GitHub and set up the pipeline to do the deployments.

Process Diagram

From the above diagram, one can understand how the pipeline is configured to run. These are the steps/tasks in the pipeline file. Let us see what each step does:

  1. Checkout from the repo: This task checks out the entire repo into the virtual machine environment. (Azure Pipelines runs on a virtual machine.)
  2. Run ANT scripts: This is a standard pipeline task configured to work in the following ways:
    1. Retrieve – just retrieve from your source org.
    2. Deploy – deploy the metadata from the repo to the target org.
    3. Both – do a retrieve and a deployment in a single pipeline run.
  3. Push to repo: This task commits and pushes the files retrieved from the source org to the repo from the Azure virtual machine’s local copy.

The pipeline I’ve created here is a dynamic one that accepts the ANT target from a variable the user sets just before running the pipeline, so they can choose between retrieve/deploy/both. If you are comfortable with build.xml, you know that with multiple targets using different sets of username/password, or with multiple pipelines linked to each other, you can automate the complete retrieve-and-deploy process from one org to another. In the example in my GitHub, you may notice I am retrieving from and deploying to the same org. I have explained in the video how this could be done between orgs.
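The retrieve/deploy/both switch in the task conditions boils down to very simple logic. Here is a sketch in plain JavaScript, just to show the semantics of the or(eq(...)) expressions used in the pipeline file:

```javascript
// Mirrors the pipeline conditions: a task runs when BUILD_TYPE equals
// its own mode, or when BUILD_TYPE is 'both'.
function shouldRun(buildType, taskMode) {
    return buildType === taskMode || buildType === 'both';
}
```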

The Master File

Now let’s look at the pipeline file in detail. I’ve explained the below YML file in detail in the video.

# SFDC Retrieve and Deploy sample

trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self

- script: |
    echo "Build Type: $(BUILD_TYPE)"
  displayName: 'Confirming Variables'

- task: Ant@1
  inputs:
    buildFile: 'MyDevWorks/build.xml'
    targets: 'retrieve'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
  displayName: 'Retrieving from Source Org'
  condition: or(eq(variables['BUILD_TYPE'], 'retrieve'), eq(variables['BUILD_TYPE'], 'both'))

- task: Ant@1
  inputs:
    buildFile: 'MyDevWorks/build.xml'
    targets: 'deployCode'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
  displayName: 'Deploy to Target Org'
  condition: or(eq(variables['BUILD_TYPE'], 'deploy'), eq(variables['BUILD_TYPE'], 'both'))

- script: |
    echo ***All Extracts Successful!!!***
    echo ***Starting copying from VM to Repo***
    git config --local user.email "myemail@gmail.com"
    git config --local user.name "Rohit"
    git config --local http.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git add --all
    git commit -m "commit after extract"
    git remote rm origin
    git remote add origin https://<<<YOUR_REPO_TOKEN>>>@<<YOUR_REPO_URL>
    git push -u origin HEAD:master
  displayName: 'Push to Repo'

Summing Up

With this setup, if your project employs a DevOps strategy, you can easily create pull requests to your target branch. You no longer need the pain of retrieving the metadata using ANT on your local machine, committing locally, and pushing to remote using GitHub Desktop/TortoiseGit/SourceTree.

Always on Cloud. Retrieve and Deploy just by using a browser.

#Giveback #StaySafe

Dynamic Actions – A Low Code Lightning Approach


In the Salesforce ecosystem, we all love the word ‘dynamic’, as it brings the business a lot of flexibility to reuse anything that is dynamic. Quick actions in Salesforce are a great feature, having replaced the JavaScript buttons we lost when we migrated from Classic to Lightning, be it object-specific quick actions or global quick actions. Quick actions enable users to do more in Salesforce and in the Salesforce mobile app. With custom quick actions, we can make our users’ navigation and workflow as smooth as possible by giving them convenient access to the information that is most important.


Let’s dive deeper with a use case. Consider a record page for a Discount Request object with a quick action that initiates an approval process in a backend system. This action was meant for sales agents with a specific profile, but the same layout was shared with all the agents, who could also see and click the quick action. Until now we have tackled this with validation logic in the Lightning component that integrates the approval, or by using separate record pages, with and without the quick action, for the different profiles. So how do we approach it with minimal components and a low-code design?

The Solution

With Dynamic Actions, available from Summer ’20, we can hide that action from the other set of users even if both sets use the same layout/record page. So how do we do this? Let us follow the below steps:

  1. Navigate to any record page and choose the highlights panel. Select the option “Enable Dynamic Actions”.
  2. Choose ‘Add Action button’
  3. From the Search field choose your Action.
  4. Add the filter.

I have chosen the quick action to be only visible by the Sales Agent Lead Profile.

With this setup, we now display the quick action only to the Sales Agent Lead profile. The other sales agent profiles that share the same record page do not see the button.

Final Thoughts

With the rise of citizen developers and a low-code approach across industries and customers, this feature adds a lot of flexibility to reuse existing layouts and record pages without needing to add more components or put logic in custom components. One limitation of Dynamic Actions is that they are currently supported only on record pages for custom objects.

Configure Case Deflection Metrics on Community Cloud

Now that you have built your customer community, customers are flowing in to view the latest products, get help from community members, view knowledge articles, and so on. To measure the community’s effectiveness, the community manager wants to report on how well the articles are helping customers: which articles help the most, and how many cases were avoided because a customer chose not to create one after seeing an article. To view these metrics, Salesforce provides a package, ‘Salesforce Case Deflection Reporting Package for Lightning Communities’. Below is the link to the package on AppExchange. The package has dashboards that show insights into how well the Contact Support Form and Case Deflection components actually deflect cases from being created in your Lightning communities.


How does this work?

The Contact Support Form component that creates the case record is placed in the Lightning community along with the Case Deflection component. The Case Deflection component searches text as it’s being entered into the Contact Support Form component and returns relevant articles and discussions. If users don’t get the answer they need, they can continue with their request for support. The lightningcommunity:deflectionSignal system event is fired in a Lightning community when a user is deflected away from creating a case. After viewing an article or discussion in a community, the user is asked if the interaction was helpful, and whether they want to stop creating their case.

Let us now look at the below video to understand how to set up a community with case deflection metrics. For the purposes of the demo, I will show just one deflection, so the reports and dashboards may not look very impressive, but they will definitely serve the purpose of understanding the setup. Let’s see which essential components need to be added to the community.

Quick Demo

Assign Task Ownership to a Queue

During my Salesforce journey, I’ve come across multiple scenarios where a customer fosters an open work culture within a team, and with Salesforce that was always easy using queues. For example, a Case record assigned to a queue brings an entire team’s attention and can be managed efficiently. I once got a requirement to have a task assigned to a queue. The background is that a task was created whenever a new customer was onboarded, and the service team wanted to send out an email on welcome/onboarding formalities. The task was assigned to a specific person, which resulted in missed tasks when the assignee was unavailable for some reason.

What is New?

Starting Spring ’20, Salesforce allows users to share their workload by setting up queues for tasks. Users can assign tasks to a queue, and then individuals in the queue can take ownership of those tasks from a list view.

Let’s see how we can setup a task that is assigned to queue with the below screenshots.

Step 1: Create a queue with Task as supported object. Add the required users to the queue.


Step 2: Create a new task. From the image below, we can see the new options for assigning a task to a user/group/queue. This is also supported via Apex, by assigning the task’s OwnerId field with the queue Id. For this demo, let’s choose Queue and create the record as explained in the next image.


The below diagram shows how this looks when logged in as a user in the queue, from the Tasks tab and the Onboarding team list view. Now it’s up to the user whether to assign the task to themselves or to another user (with the required permission).



However, assigning to a queue was not possible when a user tried to create a task via the Lightning page component. The option to choose between user/group/queue was not available from the sidebar, as shown below. Hopefully Salesforce will bring a fix or a workaround for that.


With this Spring ’20 feature, a user can assign tasks to a queue, and those tasks are available to members of the queue, which means everyone can contribute without waiting for work to be delegated or reassigned. More productivity, happier customers.

Lightning Table with Run-time actions

Salesforce made developers’ lives easy by introducing lightning:datatable in API version 41.0. Ever since, there have been a lot of enhancements to the component, with more and more attributes. In this blog we will look at how, using the ‘onrowaction’ attribute, we can create actions on the fly for the row the user has chosen. These actions are user-defined, so we can include the required business logic to perform on specific rows.

Salesforce Way

The below diagram shows how Salesforce has implemented this in their documentation. As per the documentation, actions that are not applicable are rendered in a disabled state.

Fig. 1

How do actions work on lightning:datatable?

The below code defines the columns for this lightning data table.

var rowActions = helper.getRowActions.bind(this, cmp);
// Defining the columns
cmp.set('v.columns', [
    { label: 'Name', fieldName: 'name', type: 'text' },
    { label: 'Author', fieldName: 'author', type: 'text' },
    { label: 'Publishing State', fieldName: 'published', type: 'text' },
    { label: 'Action', type: 'action', typeAttributes: { rowActions: rowActions } }
]);

The last column has the parameter typeAttributes: { rowActions: rowActions }. Because of this, the rowActions variable is invoked, calling the helper method bound to the row and the component. The row-action helper then runs the JavaScript that checks the row’s ‘published’ property and dynamically builds the options for that row.

To understand how the actions work and how to generate run-time actions, I’ve made a minor tweak to the JavaScript from the documentation, shown below. From the code, it’s evident that the actions displayed come from an array we define: the objects in the array determine which actions show up. Each entry must be an object (e.g., var myaction = {};) so we can add properties to it, and we use the below parameters for that action object.

  • label
  • iconName
  • name
  • disabled
getRowActions : function (cmp, row, doneCallback) {
    var actions = [];
    var showdetailAction = {
        'label': 'Show Details',
        'iconName': 'utility:zoomin',
        'name': 'show_details'
    };
    var deleteAction = {
        'label': 'Delete',
        'iconName': 'utility:delete',
        'name': 'delete'
    };
    if (row['published'] === 'Printed') {
        deleteAction['disabled'] = 'true';
    }
    actions.push(showdetailAction);
    actions.push(deleteAction);
    // return the actions built for this row
    doneCallback(actions);
}

The above code checks whether the published column for the row is ‘Printed’. If so, we add a disabled parameter set to true on that action object. This is the behavior shown in Fig. 1.

Our Way

We will now see how we can prevent actions from appearing in the dropdown at all when they are not applicable for that row. To change this behavior and show only the enabled actions, change the disabled-state check in the snippet above as follows:

// add the delete action only when it is applicable for this row
if (row['published'] !== 'Printed') {
    actions.push(deleteAction);
}

This way we prevent the action object from being added to the actions array at all when it should not be shown in the dropdown, as in the image below.

Either way, any value of ‘Publishing State’ other than ‘Printed’ will produce the view below.


One thing to consider here is that the data is loaded into the lightning:datatable and the row actions are displayed using the values in the DOM. So, in case you want the actions to reflect the server values for that row and column, it is important to keep the data updated before the binding happens. I’ll try to cover this scenario in another blog. For any use case where the user updates data from the same Lightning component, we can manage it via the inline editing attribute of lightning:datatable.




Automate SFDC Data Export Using ADO

Data export has been a hot topic ever since the inception of Salesforce, and there are a lot of tools that help automate this task. These tools typically generate the extract on a local drive, or perhaps on cloud servers. How about a data extract that lands directly in your repo? Yes, you heard it right: it’s possible. It has been possible for a long time, but after Azure DevOps (ADO) pipelines became popular in the market, it became much easier to implement. The same setup I’ll be explaining could be modified a bit to run from Docker or Jenkins as well. However, let’s focus our discussion on setting up this task on ADO.

Process Flow


Setup Dataloader

The Data Loader ships with a command-line version, which lets you run the Data Loader from the command line. It uses a process-conf.xml file that holds the details of the tasks to be performed. Install the latest version of the Data Loader from your Salesforce org, plus the Zulu OpenJDK. The Salesforce Data Loader uses this JDK, and the path variable must be set on your machine to run and test it locally. For the ADO setup, I’ll explain further down how we can install this JDK when we run the job.

Encrypt your password using the encrypt.bat file as outlined in the official documentation. Also, set up the process-conf.xml file in the samples folder. In this example, I’ve used two beans (that’s what they are called in the command-line Data Loader), one for the Account extract and another for the Contact extract.
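For reference, a bean in process-conf.xml looks roughly like this sketch; the id, credentials, SOQL, and file paths are placeholders, so check the command-line Data Loader documentation for the full set of keys:

```xml
<!-- Hypothetical accountExtract bean; all values are placeholders -->
<bean id="accountExtract"
      class="com.salesforce.dataloader.process.ProcessRunner"
      singleton="false">
    <property name="name" value="accountExtract"/>
    <property name="configOverrideMap">
        <map>
            <entry key="sfdc.endpoint" value="https://login.salesforce.com"/>
            <entry key="sfdc.username" value="user@example.com"/>
            <!-- value produced by encrypt.bat -->
            <entry key="sfdc.password" value="ENCRYPTED_PASSWORD"/>
            <entry key="process.operation" value="extract"/>
            <entry key="sfdc.entity" value="Account"/>
            <entry key="sfdc.extractionSOQL" value="SELECT Id, Name FROM Account"/>
            <entry key="dataAccess.type" value="csvWrite"/>
            <entry key="dataAccess.name" value="extractFiles/accountExtract.csv"/>
        </map>
    </property>
</bean>
```

The contactExtract bean follows the same shape, with Contact as the entity and its own SOQL and output file.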

Create YML Script

Now it’s time to create the YML file. This file is what the ADO job picks up to perform the actions we have specified. Create an empty YML file, add the below code, and save it.

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger: none

pool:
  vmImage: 'windows-latest'

steps:
- task: JavaToolInstaller@0
  inputs:
    versionSpec: '11'
    jdkArchitectureOption: 'x86'
    jdkSourceOption: 'LocalDirectory'
    jdkFile: 'build/setups/zulu13.29.9-ca-jdk13.0.2-win_x64.zip'
    jdkDestinationDirectory: '/builds/binaries/externals'
    cleanDestinationDirectory: true
- script: |
    mkdir extractFiles
    cd build/dataLoaderApp/bin
    echo ******Starting Customer Extract.....*******
    echo -----------------------------------
    echo Extracting Account...
    echo -----------------------------------
    call process.bat "D:/a/1/s/build/dataLoaderApp/samples/conf" "accountExtract"
    echo --------------------------------------------------------
    echo Account extraction completed successfully!
    echo --------------------------------------------------------
  displayName: 'Account Extract'
- script: |
    cd build/dataLoaderApp/bin
    echo ------------------------
    echo Extracting Contact...
    echo ------------------------
    call process.bat "D:/a/1/s/build/dataLoaderApp/samples/conf" "contactExtract"
    echo ----------------------------------------------
    echo Contact extraction completed successfully!
    echo ----------------------------------------------
  displayName: 'Contact Extract'
- script: |
    echo ***All Extracts Successful!!!***
    echo ***Starting copying from VM to Repo***
    git config --local user.email "youremail@email.com"
    git config --local user.name "Rohit"
    git config --local http.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git add extractFiles/\*.csv
    git commit -m "commit after extract"
    git remote rm origin
    git remote add origin <Repo URL>
    # Replace the username with the password in the URL, in the format https://<password>@dev.azure.com/..../../.../
    git push -u origin HEAD:master
  displayName: 'Push to Repo'

Setup ADO Pipeline

Now it’s time to move to Git and set up the pipeline. Keeping to the scope of this blog, I am not going into the details of ADO and pipelines; let’s focus on the Data Loader automation part. ADO can work with any Git repo, and in this tutorial we’ll use an Azure repo itself.

There is a free version of Azure that you can sign up for; in this tutorial, I’ll use my personal Azure instance.

Get yours by visiting here. Choose Sign up and create an account. After that, log in to Azure and follow the below steps:

  1. Create a new repo.
  2. Initialize the repo with a readme file.
  3. Clone the repo to your local machine.
  4. Merge the below files/folders:
    1. YML file
    2. Data Loader folder
    3. Zulu OpenJDK zip
  5. Commit the changes.
  6. Push to remote.

Now you have the required files on your branch/repo, and it’s time to create a pipeline job. Choose the pipeline account and click on Pipelines.


Follow the below steps:

  • Choose New Pipeline.
  • Choose ‘Azure Repos Git’.
  • Select your repo.
  • Choose ‘Existing Azure Pipelines YAML file’.
  • Enter the YML file path.
  • Choose Continue at the bottom.
  • At this point you can preview the YML file.
  • Choose Save.
  • Click Run Pipeline to run the job.

You can see the job status by choosing the job. Once the job has run successfully, you can see the extracted files in the extractFiles folder in the repo.



You saw how the files were extracted and committed to the repo. An ADO job assigns an agent you specify in the YML and runs the scripts/tasks in that VM environment. In this example we have used a Windows VM image, because the command-line Data Loader works only on Windows. This job was run manually; to schedule it, for example to run on the first of every month, you need to add a schedule with a CRON expression. I will have this covered in the upcoming video.

schedules:
- cron: "0 10 1 * *"
  displayName: First of Month 10AM Build
  branches:
    include:
    - master
  # run the schedule even when there are no code changes
  always: true