NuGet package install error "Could not find a part of the path" in Visual Studio

Dear Reader,

Of late I have been observing this issue, for myself and a few colleagues on the team, whenever we tried to install a NuGet package from the Visual Studio 2022 Enterprise "Manage NuGet Packages" view. If the NuGet package name is a bit lengthy, or it is a pre-release, then the problem appears. The error we mostly observed in the Output window is as shown:

I searched a lot on Google and the GitHub issues pages. Most of them suggested or claimed the following:

  • This issue was already fixed in earlier VS releases and should not happen
  • Modify the Windows Registry to enable the LongPathsEnabled value, reboot, and add application manifests
  • Clear the NuGet package cache
  • Move the NuGet packages folder from the %appdata% location to a much shorter path, say D:\
  • Use the latest NuGet CLI tool

After trying all of the above, it was a dead end.
Ultimately I tried the dotnet CLI command and, voila, it worked as shown:
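For reference, the command takes the general form below; the package name here is only a placeholder, and the --prerelease switch is needed only for pre-release packages:

dotnet add package Some.Vendor.PackageName --prerelease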

Hope it helps you if you are stuck! Any suggestions are welcome

Thanks πŸ™‚

Azure Webapp with Custom Email provider

Dear Reader,

Recently, as part of a project task, I had to do a POC in which I had to send emails via our company-provided SMTP service. When I started searching on Google, many results pointed to the MailKit library for .NET Core projects due to its cross-platform nature.

There were many other documents I had to scan through in order to make things work in the Azure environment (App Service).

After spending almost half a day on various permutations and different documents, I ultimately figured it out. Below are the points to be noted:

  • Port 25 is blocked by default in the Azure environment (VMs, App Service, Functions, Logic Apps, etc.).
  • It can be unblocked by Microsoft via a special support request.
  • The SMTP authentication ports, i.e. 587 and 465, are allowed by default.
  • In my scenario, we opted for simple SMTP authentication rather than OAuth2 for our project.
  • You can check whether the SMTP connection works from your App Service/VM via the Kudu console with the command “tcpping smtp.abcd.com:587”
  • In order for traffic to reach the SMTP server from the VM environment, an outbound traffic rule must be added/defined in the NSG as shown:
  • The code which sends the email is:
//Use MailKit
using MailKit.Net.Smtp;
using MailKit.Security;
using MimeKit;
using MimeKit.Text;

var email = new MimeMessage();
email.From.Add(MailboxAddress.Parse("noreply@abc.com"));
email.Subject = "Test Email";
email.Body = new TextPart(TextFormat.Plain) { Text = "This is a test mail from Azure" };

using var smtp = new SmtpClient();
smtp.CheckCertificateRevocation = false; //Required for our requirement, since we do not use SSL-based communication over port 465.
smtp.SslProtocols = System.Security.Authentication.SslProtocols.Tls12; //Preferred to make it explicit.
smtp.Connect("smtp.abc.com", 587, SecureSocketOptions.Auto);
smtp.Authenticate("username", "pwd");

email.To.Clear();
email.To.Add(MailboxAddress.Parse(toEmail));
smtp.Send(email);
smtp.Disconnect(true);

Thanks for your feedback πŸ™‚

Postgres: Check column exists and get value; refactoring

Dear Reader,

In the application I work on, we have a DAL layer which takes care of all DB-related actions. We use PostgreSQL, and hence we use the Npgsql library by default.

While I was developing a feature today, I had to convert DB results to .NET objects. To do that, we have the code below for the various supported data types.

public static bool GetBoolean(NpgsqlDataReader reader, string column)
    => !reader.IsDBNull(reader.GetOrdinal(column)) && Convert.ToBoolean(reader[column]);

public static int GetInt32(NpgsqlDataReader reader, string column)
    => !reader.IsDBNull(reader.GetOrdinal(column)) ? Convert.ToInt32(reader[column]) : 0;

public static long GetLong(NpgsqlDataReader reader, string column)
    => !reader.IsDBNull(reader.GetOrdinal(column)) ? Convert.ToInt64(reader[column]) : 0;

public static string GetString(NpgsqlDataReader reader, string column)
    => !reader.IsDBNull(reader.GetOrdinal(column)) ? Convert.ToString(reader[column]) : null;

public static DateTime? GetDateTime(NpgsqlDataReader reader, string column)
    => !reader.IsDBNull(reader.GetOrdinal(column)) ? DateTime.Parse(Convert.ToString(reader[column])) : null;

public static bool IsColumnExists(IDataRecord dr, string column)
{
    // GetOrdinal throws when the column is missing, so compare the column names instead.
    for (int i = 0; i < dr.FieldCount; i++)
    {
        if (string.Equals(dr.GetName(i), column, StringComparison.OrdinalIgnoreCase))
            return true;
    }
    return false;
}

For my requirement, where my DB stored procedure returns a table with numerous columns and rows, I had to convert the results appropriately.

Hence I started writing code using the helpers above; my consuming code went something like this:

if (await postgreReader.ReadAsync())
{
    if (SqlHelper.IsColumnExists(postgreReader, Constants.USER_ID))
        userDetails.Id = SqlHelper.GetString(postgreReader, Constants.USER_ID);

    if (SqlHelper.IsColumnExists(postgreReader, Constants.USERNAME))
        userDetails.Name = SqlHelper.GetString(postgreReader, Constants.USERNAME);

    if (SqlHelper.IsColumnExists(postgreReader, Constants.USER_LASTNAME))
        userDetails.LastName = SqlHelper.GetString(postgreReader, Constants.USER_LASTNAME);

    if (SqlHelper.IsColumnExists(postgreReader, Constants.LANGUAGE))
        userDetails.Language = SqlHelper.GetString(postgreReader, Constants.LANGUAGE);

    if (SqlHelper.IsColumnExists(postgreReader, Constants.TIMEZONE))
        userDetails.TimeZone = SqlHelper.GetString(postgreReader, Constants.TIMEZONE);

    // a few more like the above...
}

Once I wrote the above code, it started bothering me. This has unfortunately been the trend in the whole DAL layer, and for various reasons I had not considered refactoring it in the past either.

Time to get my hands dirty.. πŸ˜‰ Refactoring!! ❀

This is a joyous moment for me, because it intrigues me a lot and puts me into challenge mode. Kind of a brain teaser, I suppose.

After thinking for a minute, I straight away thought of using reflection in C#. So I wrote the helper function below, which is extensible and works for any type:

public T ConvertTo<T>(NpgsqlDataReader reader, Dictionary<string, string> columnNamePropertyNamePairs) where T : new()
{
    T typeInstance = new();
    Dictionary<Type, Func<NpgsqlDataReader, string, object>> typeToGetValueLookUp = new()
    {
        { typeof(string), (rdr, column) => GetString(rdr, column) },
        { typeof(long), (rdr, column) => GetLong(rdr, column) },
        { typeof(DateTime), (rdr, column) => GetDateTime(rdr, column) },
    };

    foreach (var item in columnNamePropertyNamePairs)
    {
        if (IsColumnExists(reader, item.Key))
        {
            PropertyInfo propInfo = typeInstance.GetType().GetProperty(item.Value, BindingFlags.Public | BindingFlags.Instance);

            // Only set the property when its type has a reader registered in the lookup above.
            if (propInfo != null && typeToGetValueLookUp.TryGetValue(propInfo.PropertyType, out var getValue))
            {
                propInfo.SetValue(typeInstance, getValue(reader, item.Key));
            }
        }
    }

    return typeInstance;
}

And to consume the above code, I would write:

UserPersonalDetailsModel userPersonalDetailsModel = ConvertTo<UserPersonalDetailsModel>(reader,
    new()
    {
        { Constants.USER_ID, nameof(UserPersonalDetailsModel.UserId) },
        { Constants.GIVEN_NAME, nameof(UserPersonalDetailsModel.GivenName) },
        { Constants.FAMILY_NAME, nameof(UserPersonalDetailsModel.FamilyName) },
        { Constants.EMAIL_ID, nameof(UserPersonalDetailsModel.EmailId) },
        { Constants.COMPANY, nameof(UserPersonalDetailsModel.Company) },
        { Constants.CREATED_ON, nameof(UserPersonalDetailsModel.CreatedOn) }
    });

This way, most of the if/else ladder goes away and the consuming code becomes fairly simple and easy to read.
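If more property types need to be supported, the lookup can simply be extended in the same style. A small sketch of what that could look like, assuming the GetBoolean/GetInt32 helpers shown earlier and model properties whose types match these keys exactly:

Dictionary<Type, Func<NpgsqlDataReader, string, object>> typeToGetValueLookUp = new()
{
    { typeof(string), (rdr, column) => GetString(rdr, column) },
    { typeof(long), (rdr, column) => GetLong(rdr, column) },
    { typeof(DateTime), (rdr, column) => GetDateTime(rdr, column) },
    { typeof(bool), (rdr, column) => GetBoolean(rdr, column) },   // for BOOLEAN columns
    { typeof(int), (rdr, column) => GetInt32(rdr, column) },      // for INTEGER columns
};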

Please provide your feedback.

Thanks πŸ™‚

Azure Blob logging with Serilog on ASP.NET Core 5.0 Web API

Dear Reader,

Lately I was tasked with implementing application logging to an Azure Blob container as well as web diagnostics via Log Stream.

After lots of research and referring to various GitHub issues, blogs and MS Docs, I must say shame on Microsoft and Serilog for not maintaining documentation that behaves exactly as described.

Below I shall elaborate on my experience accomplishing the said activity and the final outcome.

First, make sure your NSG rules are correct between the resource group in which your App Service runs and the storage account. In our case, we had configured it to block all outgoing connections except the specific ones needed.

Add the rules as shown below:

If there are any firewalls configured, please configure them to allow this traffic.

Next, get the access keys of the storage account: navigate to Storage -> <storage-account> -> Access keys -> key1 -> Connection string and copy the value. By the way, you can use key 1 or 2; it does not matter, unless you wish to rotate one of them.

Now let's move on to the application side. Here I am using the Serilog assemblies for file, console and Azure Blob logging.

Below is the code structure which is working for me:

 public static class Program
    {
        public static void Main(string[] args)
        {
            try
            {
                CreateHostBuilder(args).Build().Run();
                Console.WriteLine("CreateHostBuilder(args).Build().Run() succesfull");
            }
            catch (Exception ex)
            {
                Trace.TraceError($"Host terminated unexpectedly : {ex}");
                Console.WriteLine($"Host terminated unexpectedly : {ex}");
            }
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
             .ConfigureAppConfiguration((context, config) =>
             {
                 Console.WriteLine($"{context.HostingEnvironment.EnvironmentName}");
                 AzureConfigurationServices.ConnectAzureKeyVault(context, config);
                 Console.WriteLine("KeyVault configured. Now configuring Logging services");
                 LoggerServices.ConfigureLogger(context, config);// Make sure this always runs after the Key Vault configuration has been loaded
             })
             .ConfigureWebHostDefaults(webBuilder =>
             {
                 webBuilder.UseStartup<Startup>()
                 .UseAzureAppServices();
             })
            .UseSerilog();
    }
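
// ConfigureLogger below lives in the LoggerServices class referenced from CreateHostBuilder above: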
public static void ConfigureLogger(HostBuilderContext context, IConfigurationBuilder configBuilder)
        {
            try
            {
                var environment = context.HostingEnvironment.EnvironmentName;
                var configuration = configBuilder.Build();
                var logFilePath = "";
                if (environment is "Development" or "Integration")
                    logFilePath = Path.GetTempPath();
                logFilePath = configuration["Azure:LogFilePath"];//Ref: https://ml-software.ch/posts/writing-to-azure-diagnostics-log-stream-using-serilog

                logFilePath = Path.Combine(logFilePath, configuration.GetValue<string>("LogFolderName"));
                Console.WriteLine("LogFile Path {0}", logFilePath);

                Enum.TryParse(configuration["Logging:LogLevel:Default"], true, out Serilog.Events.LogEventLevel restrictedToMinimumLevel);
                LoggerConfiguration loggerConfig = new();
                loggerConfig.WriteTo.File(path: logFilePath + "Backend-.txt", restrictedToMinimumLevel: restrictedToMinimumLevel, rollingInterval: RollingInterval.Day);
                loggerConfig.WriteTo.Console(restrictedToMinimumLevel: restrictedToMinimumLevel);

                if (environment is AZURE_DEV_ENV_NAME or AZURE_PROD_ENV_NAME or AZURE_STAGING_ENV_NAME)
                {
                    Log.Information("Configuring azure blob now...{0} {1} {2} {3}", configuration["Azure:BlobStorage:ConnectionString"],
                        configuration["Azure:BlobStorage:Container"], configuration["Azure:BlobStorage:WritePeriodInSeconds"], configuration["Azure:BlobStorage:WriteBatchLimitNumber"]);
                    loggerConfig.WriteTo.Async(config =>
                    {
                        config.AzureBlobStorage(connectionString: configuration["Azure:BlobStorage:ConnectionString"],
                        restrictedToMinimumLevel: restrictedToMinimumLevel,
                        storageContainerName: configuration["Azure:BlobStorage:Container"],
                        storageFileName: "Backend-{dd}-{MM}-{yyyy}.txt", writeInBatches: true,
                        period: new TimeSpan(0, 0, int.Parse(configuration["Azure:BlobStorage:WritePeriodInSeconds"])), batchPostingLimit: int.Parse(configuration["Azure:BlobStorage:WriteBatchLimitNumber"]));
                    });
                }
                Log.Logger = loggerConfig.CreateLogger();
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception in logger service {0}", e);
            }
        }

As you can see from the above code, in the ConfigureLogger(..) method I am setting the file path based on the environment; this is because on local machines one may or may not have a D drive with the same path, whereas in Azure App Service the D drive is available. Without this, Log Stream won't be able to pick up the log data to be shown.
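For clarity, these are the configuration keys the method above expects. They are shown here as a plain C# dictionary purely for illustration; in reality they come from appsettings/Key Vault, and every value below is a placeholder, not our real setting:

var settingsShape = new Dictionary<string, string>
{
    ["Azure:LogFilePath"] = @"D:\home\LogFiles\",             // App Service log folder (assumption; see referenced post)
    ["LogFolderName"] = "MyAppLogs",                          // sub-folder used by the rolling file sink
    ["Logging:LogLevel:Default"] = "Information",             // parsed into a Serilog LogEventLevel
    ["Azure:BlobStorage:ConnectionString"] = "<storage-account-connection-string>",
    ["Azure:BlobStorage:Container"] = "applogs",
    ["Azure:BlobStorage:WritePeriodInSeconds"] = "15",
    ["Azure:BlobStorage:WriteBatchLimitNumber"] = "50"
};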

For Azure Blob logging I am using the Async wrapper; with the direct AzureBlobStorage(…) sink, the application would not start and would get stuck trying to write to the Azure blob. This used to make Azure App Service restart my application repeatedly. Unfortunately, even the Serilog documentation describes it that way 😦

I also turned off the App Service logs option in App Service as shown, and this has no effect on the blob logging.

Thanks and please drop in your comments πŸ™‚

References:
1. https://hovermind.com/aspnet-core/logging.html
2. https://ml-software.ch/posts/writing-to-azure-diagnostics-log-stream-using-serilog
3. Github issues
4. Serilog and MS Documents

PostgreSQL DB Data compare

Hi Reader,

Recently I wanted to do a data comparison between one DB and another. I wanted to do this especially when a database migration (application release) is done from Vx to Vy and it has to be ensured that only schema changes are made, without changing any existing data, especially in the case of a production DB.

I started looking for tools; pgAdmin provides a schema diff tool, but not a data comparison tool. After googling for a while, I did not find many options.
So I started taking matters into my own hands. πŸ˜‰

I wanted it to be simple and stupid. After extensive research, I came up with the PowerShell script below. It was fun learning PowerShell scripting from scratch, though.

Data dump script:

[cmdletbinding()]
param (
    $databaseName = "Database",
    $hostname = "localhost",
    $username = "postgres",
    $password = "abcd",
    $isBackUpDB = $false    
)

$psqlBinPath = "C:\Program Files\PostgreSQL\11\bin"
Set-Item -Path Env:PGPASSWORD -Value $password

Function Write-Log([string]$logMessage, [int]$level = 0) {
    $logdate = Get-Date -format "yyyy-MM-dd HH:mm:ss"
    if ($level -eq 0) {
        $logMessage = "[INFO] " + $logMessage
        $text = "[" + $logdate + "] - " + $logMessage
        Write-host $text -ForegroundColor Green
    }
    if ($level -eq 1) {
        $logMessage = "[WARNING] " + $logMessage
        $text = "[" + $logdate + "] - " + $logMessage
        Write-Host $text -ForegroundColor Yellow
    }
    if ($level -eq 2) {
        $logMessage = "[ERROR] " + $logMessage
        $text = "[" + $logdate + "] - " + $logMessage
        Write-Host $text -ForegroundColor Red
    }
    $text >> $logFile
}

function StartProcess ($filename, $arguments) {
    Write-Log "Executing $filename started"
    $pinfo = New-Object System.Diagnostics.ProcessStartInfo
    $pinfo.FileName = "$filename"
    $pinfo.RedirectStandardError = $true
    $pinfo.RedirectStandardOutput = $true
    $pinfo.UseShellExecute = $false
    $pinfo.Arguments = "$arguments"    
    $p = New-Object System.Diagnostics.Process
    $p.StartInfo = $pinfo
    $p.Start() | Out-Null
    $p.WaitForExit()
    $stdout = $p.StandardOutput.ReadToEnd()
    $stderr = $p.StandardError.ReadToEnd()
    Write-Log "stdout: $stdout"
    Write-Log "stderr: $stderr"
    if ($p.ExitCode -ne 0 ) {
        Write-Log "Operation Failed!" 2
        Exit 1
    }
    Write-Log "Executing $filename completed"
    return $stdout
}

function GetTablesNamesForDatabase {

    write-log "Fetching names of all tables in public schema $psqlBinPath"    
    StartProcess "$psqlBinPath\psql.exe" "-U $username -d $databaseName -c `"\COPY (select table_name from information_schema.tables where table_schema = 'public') TO '$env:TEMP\tables.txt'`"" > $null
    $tableNamesArray = [System.Collections.ArrayList]@()
    foreach ($line in Get-Content $env:TEMP\tables.txt) {
        $tableNamesArray.Add($line.ToString()) > $null
    }
       
    return $tableNamesArray
}

function GetDumpOfEachTable {
   
    Remove-Item $env:TEMP\$dbDumpFolderName -Recurse -ErrorAction Ignore > $null
    New-Item -ItemType directory -Path $env:TEMP\$dbDumpFolderName > $null
     
    for ($i = 0; $i -lt $arrayOfNames.Count; $i++) {     
        $lineTemp = $arrayOfNames[$i]
        Write-log "Dumping for  $lineTemp"        
            
        StartProcess "$psqlBinPath\psql.exe" "-U $username -d $databaseName -c `"\COPY (select * from \`"$lineTemp\`") TO '$env:TEMP\$dbDumpFolderName\$lineTemp.txt' WITH DELIMITER ','`"" > $null               
    }
}
# ------------------ Main ----------------------
Write-log "Startig script"
$dbDumpFolderName = $databaseName

$file = ("$dbDumpFolderName.log")

if ($isBackUpDB -eq $true) {
    $dbDumpFolderName = -join ($databaseName, "_backup")
    $file = ("$dbDumpFolderName.log")
}
$logpath = Resolve-Path -Path $env:TEMP 
$logFile = Join-Path $logpath $file
Remove-Item -Path $logFile -ErrorAction Ignore >$null
$arrayOfNames = [System.Collections.ArrayList](GetTablesNamesForDatabase)

GetDumpOfEachTable

Write-Log "Dumping Database Done"

The above code is self-explanatory. I am using the PostgreSQL command "\copy" to take a data dump of all available tables from the public schema, and for each table a dump of the table data is taken in CSV format into its respective folder.

I wanted to run the above script twice: once before the database schema modifications are done as part of the application version migration (with the backup flag), and once after.
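For example, assuming the dump script above is saved as dump-db-data.ps1 (the file name and database name here are hypothetical):

# Before the migration: dumps into the "<database>_backup" folder under $env:TEMP
.\dump-db-data.ps1 -databaseName "MyDb" -isBackUpDB $true
# After the migration: dumps into the "<database>" folder under $env:TEMP
.\dump-db-data.ps1 -databaseName "MyDb"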

Once the migration is done, the next step is to compare the data of each DB table line by line with the script below.

[cmdletbinding()]
param (   
    $backUpDumpFolderName = "Database_backup",
    $migratedDumpFolderName = "Database"
)

$filesWithDifference = @{}

function CompareFiles {
    $backUpDBSqlFiles = Get-ChildItem -Path $backUpDumpFolderName -Filter '*.txt'
    $migratedDBSqlFiles = @(Get-ChildItem -Path $migratedDumpFolderName -Filter '*.txt')
    
    foreach ($backUpDBFile in $backUpDBSqlFiles) {                         
        $migratedDBFile = $migratedDBSqlFiles | Where-Object { $_.Name -eq $backUpDBFile.Name }
        Write-Log "Getting content for $backUpDBFile.FullName and $migratedDBFile.FullName"
        $migratedDBFileContent = Get-Content -Path $migratedDBFile.FullName 
        $backupDBFileContent = Get-Content -Path $backUpDBFile.FullName
               
        if ($migratedDBFileContent.Length -lt $backupDBFileContent.Length) {
            $migratedDBFileContentLength = $migratedDBFileContent.Length
            $backupDBFileContentLength = $backupDBFileContent.Length
            Write-Log "Migrated DB Table File length $migratedDBFileContentLength is less than backup DB Table length $backupDBFileContentLength" 2
            $filesWithDifference.Add($migratedDBFile.FullName, $backUpDBFile.FullName)
            continue
        }           

        for ($i = 0; $i -lt $backupDBFileContent.Length; $i++ ) {           
            if ($migratedDBFileContent[$i].Contains($backupDBFileContent[$i])) {
                continue                    
            }
            else {
                $filesWithDifference.Add($migratedDBFile.FullName, $backUpDBFile.FullName)
                break
            }
        }        
    }
}

Function Write-Log([string]$logMessage, [int]$level = 0) {
    $logdate = Get-Date -format "yyyy-MM-dd HH:mm:ss"
    if ($level -eq 0) {
        $logMessage = "[INFO] " + $logMessage
        $text = "[" + $logdate + "] - " + $logMessage
        Write-host $text -ForegroundColor Green
    }
    if ($level -eq 1) {
        $logMessage = "[WARNING] " + $logMessage
        $text = "[" + $logdate + "] - " + $logMessage
        Write-host $text -ForegroundColor Yellow
    }
    if ($level -eq 2) {
        $logMessage = "[ERROR] " + $logMessage
        $text = "[" + $logdate + "] - " + $logMessage
        Write-host $text -ForegroundColor Red
    }
    $text >> $logFile
}

# ------------------ Main ----------------------
Write-Log ""
Write-Log "Startig Comparing script"

$logpath = Resolve-Path -Path $env:TEMP 
$file = ("DBDataDiff.log")
$logFile = Join-Path $logpath $file
Remove-Item -Path $logFile -ErrorAction Ignore >$null

$backUpDumpFolderName = $env:TEMP + "\$backUpDumpFolderName"
$migratedDumpFolderName = $env:TEMP + "\$migratedDumpFolderName"

CompareFiles
if ($filesWithDifference.Count -gt 0) {
    Write-Log "Operation Failed. Found content deleted/changed in migrated Database dump for below files" 2
  
    foreach ($ele in $filesWithDifference.GetEnumerator()) {
        Write-log "MigratedFile= $($ele.Name) BackupFile= $($ele.Value)" 2
    }
}
else {
    Write-Log "Success; no data disparity found!"
}

The above script scans through each file in the two folders (the DB data dumps), reads its contents, and checks that the backup DB data is still present in the migrated DB data.

The downside of this script is that its run time grows with the number of table rows and the number of tables.

Some of the simplifying assumptions I made for this script are:
1. Only the existing data is verified in the new DB dump, not the schema itself
2. Table column name changes are not considered

Hope it helps. Let me know any other/better ways to do the job. Happy to learn.

Thanks

ASP.NET Core HttpClient unauthorized error

Dear Reader,
Recently I faced a weird issue while working on my ASP.NET Core Web API project, which communicates via HttpClient with another ASP.NET Core app hosted on a remote server and developed by another team.

Both applications were using JWT authentication and worked independently.
The weird part here is that, through Postman running locally, the Core app (App2) hosted on the remote machine responds with HTTP 200. But with App1 running locally, HttpClient gets an unauthorized error.

  • The initial suspicion was that the App1 configuration was wrong w.r.t. the JWT audience and other Auth0 configuration.
  • Further debugging showed that the token was fetched properly. The same token, when passed through Postman, would work.
  • I added HTTP logs in App1. Still no luck.
  • On further investigation in Postman, I understood that it was adding a header called “host” automatically.

Next I disabled the host header; then Postman started receiving the “unauthorized” error too. Weird!

This also gave me a clue, so I added the same “host” header in the App1 code and tried; bummer! No luck.

So I started analyzing the App2 code, which was developed by a different team, against my project code. While analyzing both code bases, I noticed that the code below, which we were using, was missing in App2.

services.AddCors(options =>
{
    options.AddPolicy(name: "Origins",
        builder =>
        {
            builder.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod();
        });
});

After adding the above code to App2's Startup.cs and deploying to the remote server again, it started working flawlessly.
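As a side note, registering the policy with AddCors does not by itself apply it; it normally also has to be enabled in the middleware pipeline. A minimal sketch of what that usually looks like in Startup.Configure, assuming the same policy name as above:

// Sketch only: apply the "Origins" policy registered in ConfigureServices.
// UseCors must come after UseRouting and before UseAuthorization/UseEndpoints.
app.UseRouting();
app.UseCors("Origins");
app.UseAuthentication();
app.UseAuthorization();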

I still do not understand why it did not work through HttpClient without the above code, and why it started working everywhere after adding it.

Though I managed to solve the issue at hand, it now bugs me even more that I do not know the internals.

Please drop in your comments.

Thanks

AWS Cognito offloading to Serverless Architecture

Dear Reader,

In my project, we use Angular for the front-end app and Spring Boot for the backend. Since ours is a cloud-based application, we use AWS cloud infrastructure for almost everything.
We use AWS Cognito for the complete authentication workflow. Like any typical project, we also have three build types, i.e. dev, QA and production, and we have created different Cognito pools for the different builds.

All this client pool information needs to be stored in the front-end environment-specific files as shown:

I got the opportunity to propose an idea to my architect for offloading the authentication strategy to the server side and making it completely serverless, as shown in the architecture diagram below:

How it works:
    • Any incoming API call to the /login or /logout endpoint is re-routed to the environment-specific Lambda functions by the API Gateway.
    • The Lambdas get the credentials authenticated by the Cognito service and return the JWT tokens and other necessary information, including any proprietary info if required (see the sketch after these lists).
    • The remaining APIs are re-routed to the EC2 instances as they normally would be.
    • The EC2 instances again check the validity of the token passed in the API calls with Cognito, as shown above.
Advantages:
    • The client (front end) has no knowledge of the authentication algorithm/services used in the backend.
    • The authentication algorithm can be changed on the fly without any front-end deployment/changes.
    • The authentication mechanism can be implemented in any Lambda-supported programming language.
    • Extra proprietary steps can be added to the authentication flow if required (AWS Step Functions).
    • Being serverless, it is low cost, highly scalable, available and flexible.
Disadvantages:
    • The client has the burden of encrypting the payload during login.
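To make the /login flow concrete, below is a minimal sketch of such a Lambda. Our actual implementation is in Java/Spring, so treat this C# version purely as an illustration; the payload shape, environment variable name and class names are assumptions, not the real code:

using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class LoginFunction
{
    private readonly IAmazonCognitoIdentityProvider _cognito = new AmazonCognitoIdentityProviderClient();

    // Handles POST /login routed here by the API Gateway.
    public async Task<APIGatewayProxyResponse> Handler(APIGatewayProxyRequest request, ILambdaContext context)
    {
        // Assumed payload shape: { "username": "...", "password": "..." }
        var credentials = JsonSerializer.Deserialize<Dictionary<string, string>>(request.Body);

        // Authenticate against the environment-specific Cognito user pool app client.
        // (USER_PASSWORD_AUTH must be enabled on that app client.)
        var authResponse = await _cognito.InitiateAuthAsync(new InitiateAuthRequest
        {
            ClientId = Environment.GetEnvironmentVariable("COGNITO_CLIENT_ID"),
            AuthFlow = AuthFlowType.USER_PASSWORD_AUTH,
            AuthParameters = new Dictionary<string, string>
            {
                ["USERNAME"] = credentials["username"],
                ["PASSWORD"] = credentials["password"]
            }
        });

        // Return the JWTs (plus any proprietary info, if needed) to the front end.
        return new APIGatewayProxyResponse
        {
            StatusCode = 200,
            Body = JsonSerializer.Serialize(new
            {
                idToken = authResponse.AuthenticationResult.IdToken,
                accessToken = authResponse.AuthenticationResult.AccessToken,
                refreshToken = authResponse.AuthenticationResult.RefreshToken,
                expiresIn = authResponse.AuthenticationResult.ExpiresIn
            })
        };
    }
}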

That's all folks, let me know your thoughts.

Thanks πŸ™‚

Lazy load PlotlyJS in Angular via Service

Dear Reader,

Recently I noticed in my project that the PlotlyJS library's JS file, which is 3.2 MB, was being loaded upfront when the application loads.
In this project, until the user navigates to the graph-related view, Plotly is of no use.

When I did some performance vs. memory analysis of the application, I used to see around 3 MB+ of memory being used and 3-4 MB of data being downloaded every time the application was loaded fresh (browser cache refresh).

This was annoying to me, especially when I worked on a mobile data network rather than Wi-Fi. So I started investigating how to make it dynamically loadable.

I found the article below on Google, which does this for a different library: https://thecodeframework.com/angular-how-to-lazy-load-external-scripts/

On the same principle, I went ahead and implemented it, and it works πŸ˜‰
Below is the service code which does the work:


loadDynamicScript(): Promise<any> {
  return new Promise((resolve, reject) => {
    if (isNullOrUndefined(window.document.getElementById(this.id))) {
      const scriptElement = window.document.createElement('script');
      scriptElement.id = this.id;
      scriptElement.src = 'https://<Remote storage URL>/external-widgets/plotly.js/1.48.3/plotly.min.js';
      scriptElement.onload = () => {
        resolve();
      };
      scriptElement.oncancel = () => {
        window.document.getElementById(this.id).remove();
        reject();
      };
      scriptElement.onerror = () => {
        window.document.getElementById(this.id).remove();
        reject();
      };
      window.document.body.appendChild(scriptElement);
    }
    else {
      resolve();
    }
  });
}


In the service above, I first check whether the script element already exists by its ID, because I also wanted the service itself to be dynamically created by Angular and injected only in the modules that require it, rather than instantiated at root level.

In the component it is consumed as shown:


Plotly: any;

constructor(private dynamicScriptLoaderService: PlotlyDynamicScriptLoaderService) { }

ngOnInit() {
  this.dynamicScriptLoaderService.loadDynamicScript()
    .then(
      () => {
        this.Plotly = (window as any).Plotly;
      },
      () => {
        this.isError = true;
        console.log("Error loading graph");
      });
}

Hope it helps.

Please leave a comment.

Thanks

AWS ECS – Containers with Spring boot and mistakes

Dear Reader,

Though there are tons of articles/videos describing the step-by-step process, in this article I shall not repeat them.

The agenda here is just to highlight the aha/gotcha/mistake points I learnt during the whole process of deploying it and making it work.

I followed this video tutorial series on ECS EC2 containerization from Mayank: https://www.youtube.com/watch?v=oyWOkGPgaM0, and I would strongly recommend following his channel. Great stuff!!

Ahaaa../Gotcha/Mistakes:

  • If you are deploying containers in a private subnet, make sure the private and public subnets are in the same Availability Zones.
  • If you plan to have an external-facing ALB, make sure the ALB is in public subnets in the same AZs as the private subnets.
  • The NAT gateway must be in a public subnet, i.e. one whose route table points to the internet gateway (IGW).
  • Create/assign a route table to the private subnets in which the NAT gateway is set as the default route.
  • Make sure the route tables, IGW and NAT gateway are configured properly; basically you need two route tables, both with a 0.0.0.0/0 destination route:
    • one pointing to the IGW, associated with the public subnets.
    • another pointing to the NAT gateway, associated with the private subnets.
  • Before you proceed with ECS, have a public bastion host. SSH into it, get into a private instance and ping a Google URL to make sure your private instances can access the internet via the NAT gateway to pull Docker images.
  • In my case, my Spring Boot application was talking to DynamoDB and S3, so make sure your container (task role) has an IAM policy as shown:
  • For ECS to successfully talk to EC2, create containers and manage them on your behalf, make sure to provide the task execution role properly as shown:
  • To get container logs of what is happening during application startup and other things, make sure to enable the awslogs configuration on the task definition -> Add container page as shown:
  • Use a separate ECS service per task definition, i.e. taskA (microserviceA) and taskB (microserviceB).
  • Each service should be associated with a different target group.
  • The ALB should do path-based routing to the target groups so that both microservices can be reached on the same port.
  • Spring Boot application: do not forget to add the Actuator dependency for the pre-built additional functionality provided by the Spring library. Actuator by default adds a few more endpoints to your application, in this case the health check endpoint that the ALB needs to work properly. Refer to https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html for more clarity.
  • Configure the /actuator/health path in your ALB target group health check. Otherwise, the target group keeps draining/de-registering your tasks.

Thanks & Regards,

Zen πŸ™‚

AWS IOT Core communication with local client

Dear Reader,

I thought of exploring the IoT Core service from AWS. As part of that, I started going through the AWS developer documentation for my simple task: to understand and experiment with whether a local client, say a Java app on my laptop, can communicate with IoT Core based on a topic.

The AWS documentation gave me an initial kick start, but I had trouble understanding it as I continued. Also, I was more interested in setting up a simple "thing" with separate topics, with an example of the code in Java.

So here I am putting it up in a concise, crisp manner without lengthy explanations.

Step 1: Create a Thing

Under the Manage section, click Things and click the button to create a single thing. On that screen, you need to enter a name, a device type and tags if required. Once done, it shall look as below:

Step 2: Create policies. On the AWS IoT Core page, look for the Secure section as shown:

Click the create button to create a new policy. The add statements section is for creating topics and specific rule sets (allow/deny).
In the Action input, type iot and the dropdown lists the various actions available, as shown below:

Next, you need to assign a resource ARN. Make sure you first select the connect action as shown below. Here I have given * in the ARN input field because anybody who has the certificates and key (which I'll show later below) can connect to this ARN.
However, if you have a specific device/ID which alone needs to connect to this AWS IoT thing, then you can assign that ID.
I have also created additional topics and highlighted the topic name for clarity. The topic names are not standardized; you can choose anything you want.

Step 3: Certificates

Under the Secure section, navigate to Certificates and click the create button. Then click the create certificate button, as we are interested in AWS-provided keys and a certificate. Once the button is clicked, you shall be provided with 3 files to be downloaded as shown:

Make sure to keep them safe and secure. These files need to be passed to the AWS IoT APIs for communication. You may also need the root certificate if you are opting for a WebSocket connection, but in this article I'll talk about the HTTPS-based connection.
Next, click on attach policy and select the policy listed on the next page as shown below:

Step 4: Assign Policy and Certificates to the Thing

Navigate back to Secure -> Certificates and select the 3 dots on the certificate tile created in the previous steps. It shall show a context menu as shown:

Select the "Attach thing" option and select the thing from the available list. Once that is done, in the same context menu click the Activate item to activate the security for this thing.

Step 5: IoT Endpoint (HTTPS)

Navigate back to Manage -> Things and select the created "Thing" tile. From the left-side menu select "Interact" and you can find the endpoint as shown:

Step 6: Client-side application (Java)

I have tried the SDK provided here: https://github.com/aws/aws-iot-device-sdk-java. There is a next generation of the SDK provided by AWS, https://github.com/aws/aws-iot-device-sdk-java-v2, but I did not give it a try at the moment.
The Git repo also includes an example application which you can refer to or reuse to try the same. You only have to supply the certificate and private key files to the SDK API. The API provides blocking and non-blocking publish/subscribe calls to the IoT Core services.

Thanks to below articles:

Next, I shall explore how to trigger a Lambda function and pass on the payload when IoT Core receives device messages.

Thanks
