Run Microsoft Playwright – Jasmine – Smoke tests in VS Code with the test explorer


When you want to run Jasmine tests with VS Code and the test explorer, make sure the location of the jasmine.json file is set correctly in:

File > Preferences > Settings > Workspace > Extensions > Jasmine Test Explorer > Jasmine Explorer: Config

If you don’t see the tests show up in the test explorer, open “Output > Jasmine Tests” to see any loading errors.
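As a sketch, the workspace setting can also be put directly in .vscode/settings.json. The setting key below is the one used by the Jasmine Test Explorer extension, and the path to jasmine.json is an assumption; adjust it to your project layout:

```json
{
  // Hypothetical path – point this at your project's jasmine.json
  "jasmineExplorer.config": "spec/support/jasmine.json"
}
```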

How to fix: UseExceptionHandler not being executed on validation errors in ASP.NET Core.


The following post explains exactly how to globally handle validation errors thrown by the ASP.NET Core framework.

Centralized exception handling and request validation in ASP.NET Core


Validation errors will not trigger the “UseExceptionHandler”.

If you want to change the response for validation errors, you will have to configure the “ApiBehaviorOptions”:





services.Configure<ApiBehaviorOptions>(options =>
{
    options.InvalidModelStateResponseFactory = ctx => new ValidationProblemDetailsResult(_logger);
});



Read the post linked above for more details.

How to show logging in Karma Jasmine unit tests.


In Karma v5.0.9, “console.log” statements, both inside the spec tests and in your web application code, are logged to the terminal running the Karma server by default.

If you want to turn this off, you can set “captureConsole” to false in the “client” section of your karma.conf.js.

You can also change the log level by setting the “logLevel” setting e.g. “logLevel: config.LOG_INFO” to log info messages and above.




Get raw content response from an xhr request copied from Google Chrome in PowerShell


Google Chrome allows you to copy an executed xhr request and replay it in PowerShell:

  • Open the developer tools (F12)

  • Open the network tab

  • Trigger the network request by executing the functionality in your web application

  • Right click the request in the network tab, then “Copy” and then “Copy as PowerShell”


Now paste this in a PowerShell prompt and you should get the same result as in the browser.


If the request fails with an HTTP error status code, you can use the following code to get more information on the error:

Just replace the text “<< paste request from google Chrome here >>” with the copied xhr request.


try {
  $webResponse = << paste request from google Chrome here >>

  Write-Host "Response: $($webResponse)"
}
catch {
  Write-Host "ErrorDetails: $($_.ErrorDetails)"
  Write-Host "ExceptionResponse: $($_.Exception.Response)"
}
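If you also need the raw body of the failed response, a sketch like the following usually works in Windows PowerShell 5.x, where Invoke-WebRequest throws an exception wrapping a WebException on HTTP errors (the URL below is a hypothetical placeholder for the copied request):

```powershell
try {
  # Hypothetical failing request – replace with the copied xhr request
  $webResponse = Invoke-WebRequest -Uri "https://example.com/api/does-not-exist"
}
catch {
  # The WebException's response stream contains the raw error body
  $stream = $_.Exception.Response.GetResponseStream()
  $reader = New-Object System.IO.StreamReader($stream)
  Write-Host "Body: $($reader.ReadToEnd())"
  $reader.Dispose()
}
```

Note that in PowerShell 7+ the error body is more easily available via $_.ErrorDetails.Message.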



2020-08-27 Learned Today


How to use the PlayStation 4 DualShock 4 controller on a Windows 10 laptop by using the built-in default Bluetooth receiver with Fortnite

The battle between Apple and Epic Games led to my kids not being able to play the latest Fortnite season on their Apple (iOS) devices.

They decided they wanted to play the new Fortnite season on a good old Windows 10 laptop, but just like on the Apple (iOS) devices, they wanted to use a PlayStation 4 DualShock 4 controller.

Out of the box I could connect the controller just by pressing and holding the “PlayStation” button and the “Share” button on the controller at the same time.

Just make sure you are not within reach of a PlayStation.


The controller was then connected to the Windows 10 laptop, but Fortnite did not recognize it.

After a short search on the internet I found the solution at:

I downloaded this software / driver from GitHub and installed it.

After rebooting, I connected the PlayStation DualShock 4 controller before starting the Fortnite game.

Fortnite recognized the controller and all was well.

My kids now use a Windows 10 laptop for playing the latest Fortnite season.

Data flows and good DDD architecture in .NET Core and Entity Framework (EF)


    # Introduction

From the blog post at and code at 

I have learned how to architect a .NET Core project by using DDD principles.

The main thing you will have to realize, when you come from a layered architecture, is that the logical flow of data does not correspond with the project references in Visual Studio.



    # Logical dataflow

    Controllers (MyApp.Web.csproj) => Domain Services (MyApp.Domain.csproj) => Persistence services (MyApp.Persistence.csproj)


    Controller data flows

    – On an HTTP GET a .NET controller will,

      OPTIONAL – Receive primitive types or data transfer objects (DTO) from the client

      OPTIONAL – Map data transfer objects (DTO) to domain models

      Call domain service(s) by using DI

      Receive domain models

      Map domain models to data transfer objects (DTO)

      Return data transfer objects (DTO) to the client.

    – On an HTTP POST a .NET controller will,

      OPTIONAL – Receive primitive types or data transfer objects (DTO) from the client

      OPTIONAL – Map data transfer objects (DTO) to domain models

      Call domain service(s) by using DI

      OPTIONAL – Receive domain models

      OPTIONAL – Map domain models to data transfer objects (DTO)

      OPTIONAL – Return data transfer objects (DTO) to the client.
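The GET flow above can be sketched in a hypothetical controller. All type and service names here (CustomersController, ICustomerService, CustomerDto) are assumptions for illustration, not taken from the referenced code:

```csharp
// MyApp.Web – hypothetical controller illustrating the GET data flow
[ApiController]
[Route("api/[controller]")]
public class CustomersController : ControllerBase
{
    // Domain service, injected via DI
    private readonly ICustomerService _customerService;

    public CustomersController(ICustomerService customerService) =>
        _customerService = customerService;

    [HttpGet("{id}")]
    public async Task<ActionResult<CustomerDto>> Get(int id)
    {
        // Call the domain service and receive a domain model
        Customer customer = await _customerService.GetCustomerAsync(id);
        if (customer == null) return NotFound();

        // Map the domain model to a DTO and return it to the client
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }
}
```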


    Domain Service data flows

    – Receive primitive types or domain models

    – Act on domain models

    – OPTIONAL – Call persistence services with primitive types or domain models, by using DI

    – OPTIONAL – Receive primitive types or domain models

    – OPTIONAL – Return primitive types or domain models


    Persistence services data flows

    – OPTIONAL – Receive primitive types or domain models

    – OPTIONAL – Map primitive types or domain models to persistence models

    – OPTIONAL – Act on persistence models

    – Call an ORM or other persistence layer framework

    – OPTIONAL – Map persistence models to primitive types or domain models

    – OPTIONAL – Return primitive types or domain models




    # Project references

    Controllers (MyApp.Web.csproj) => Persistence services (MyApp.Persistence.csproj) => Domain Services (MyApp.Domain.csproj) => MyApp.Infrastructure.csproj (Logging, Monitoring, Security, and other code that can be reused between projects).


    By putting the interfaces for the ‘persistence services’ inside the ‘domain’ project, putting the implementation inside the ‘persistence’ project, referencing the ‘domain’ project from the ‘persistence’ project, and using DI inside the ‘web’ project, we can make the ‘domain’ project totally independent from other custom assemblies and make the controllers only use types from the ‘domain’ project.

    Note: All data coming from the ‘persistence’ project and all data sent to the ‘persistence’ project should be primitive types or ‘domain’ models.

    It is a good practice to make the Entity Framework types internal, so they cannot be accidentally used outside the MyApp.Persistence.dll.
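A sketch of this setup with hypothetical names (Customer, ICustomerRepository, CustomerEntity, and AppDbContext are all assumptions): the interface and domain model live in MyApp.Domain, the implementation and the internal EF entity live in MyApp.Persistence, and the web project wires them together.

```csharp
// MyApp.Domain – the domain model and the persistence interface
public class Customer { public int Id { get; set; } public string Name { get; set; } }

public interface ICustomerRepository
{
    Task<Customer> GetCustomerAsync(int id);
}

// MyApp.Persistence – EF entity kept internal so it cannot leak out of this assembly
internal class CustomerEntity { public int Id { get; set; } public string Name { get; set; } }

public class CustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _context; // hypothetical EF DbContext

    public CustomerRepository(AppDbContext context) => _context = context;

    public async Task<Customer> GetCustomerAsync(int id)
    {
        CustomerEntity entity = await _context.Customers.FindAsync(id);
        // Map the internal persistence model to a domain model before returning
        return entity == null ? null : new Customer { Id = entity.Id, Name = entity.Name };
    }
}

// MyApp.Web – bind the domain interface to the persistence implementation via DI:
// services.AddScoped<ICustomerRepository, CustomerRepository>();
```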

    In some cases, you will have multiple persistence projects, e.g. when you must interact with both a database and other microservices:

    – MyApp.Persistence.Database.csproj

    – MyApp.Persistence.MyOtherMicroService.csproj




    # Other Resources

    A good description of how data should be passed between layers can be found here:


    EF and DDD



2020-08-13 Learned Today


How to correctly add a foreign key in SQL Server

When you script a table that has a foreign key, SQL Server Management Studio will generate the following code:


ALTER TABLE [Production].[ProductCostHistory] WITH CHECK ADD

CONSTRAINT [FK_ProductCostHistory_Product_ProductID] FOREIGN KEY([ProductID])

REFERENCES [Production].[Product] ([ProductID])


followed immediately by:


ALTER TABLE [Production].[ProductCostHistory] CHECK CONSTRAINT [FK_ProductCostHistory_Product_ProductID]




Why this second statement?

It enables an existing foreign key, but does not check whether the existing data in the table is consistent with the foreign key.

If you want to do that, you would have to write:


ALTER TABLE [Production].[ProductCostHistory] WITH CHECK CHECK CONSTRAINT [FK_ProductCostHistory_Product_ProductID]

Yes, the “CHECK CHECK” seems strange, but this is how the documentation states it should be written:


I think these lines are generated for safety (sometimes foreign keys are disabled and not enabled again, which causes the query optimizer to not use these untrusted foreign keys), but I think this check belongs in a monitoring script, not in a deployment script for a new version of the database.


Will dropping a table remove constraints in SQL Server?

When you drop a table in SQL Server, all primary keys, foreign keys, defaults, etc. are removed from the database.
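A quick way to verify this yourself, as a sketch against a hypothetical pair of demo tables (the table and constraint names are made up):

```sql
-- Hypothetical demo: constraints disappear together with the table
CREATE TABLE dbo.DemoParent (Id INT NOT NULL PRIMARY KEY);
CREATE TABLE dbo.DemoChild (
    Id INT NOT NULL PRIMARY KEY,
    ParentId INT NOT NULL CONSTRAINT FK_DemoChild_DemoParent
        FOREIGN KEY REFERENCES dbo.DemoParent (Id)
);

-- Returns 1 row: the foreign key exists
SELECT name FROM sys.foreign_keys WHERE name = 'FK_DemoChild_DemoParent';

DROP TABLE dbo.DemoChild;

-- Returns 0 rows: the constraint was removed together with the table
SELECT name FROM sys.foreign_keys WHERE name = 'FK_DemoChild_DemoParent';
```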