YouTube-Style Face Detect, Crop, and Blur Using Python and OpenCV

This post focuses on the core concepts of image processing. These areas will act as building blocks for more intricate image manipulations in a later post. Once we are familiar with how Python and OpenCV work together, we will be able to move between the different concepts more easily.

Face processing is a hot topic in artificial intelligence because a lot of information can be automatically extracted from faces using computer vision algorithms. 

The face plays an important role in visual communication because a great deal of non-verbal information, such as identity, intent, and emotion, can be extracted from human faces.

Face processing is a really interesting topic for computer vision learners because it touches on different areas of expertise, such as object detection, image processing, landmark detection, and object tracking.




A Simple Explanation of Regularization in Machine Learning

In this post, we are going to look into regularization and also implement it from scratch in Python (Part 2). We will walk through examples and visuals to understand it much better. We already know about linear regression, which is where regularization is commonly applied.


In-Depth Explanation of Simple Linear Regression from Scratch - Part 1

In my opinion, most machine learning tutorials aren't beginner-friendly enough: they are either very math-heavy or they don't explain the algorithms behind the methods.
In this post, we are going to build simple linear regression from scratch. We will see the mathematical intuition behind it, write the code from scratch, and test it. I'm super excited to get started!




Overview Guide to TensorFlow 2.x with Examples


The most concise and complete explanation of what TensorFlow is can be found on the official site (https://www.tensorflow.org/), and it highlights every important part of the library.

TensorFlow is an open-source software library for high-performance numerical computation.

Its flexible architecture allows easy deployment of computation across a range of platforms (CPUs, GPUs, and TPUs), from desktops to clusters of servers, to mobile and edge devices.

Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains.

In this blog post, we are going to cover the basics of TensorFlow 2.x. It can be used as a getting-started guide to learn and understand the library.

I'm not going to cover the installation/setup of Jupyter, as this can easily be found online.




Create a CRUD RESTful Service API Using Flask + MySQL [in 7 minutes!]

In this article, we will learn how to build a simple RESTful API with Flask and MySQL that can create, read, update, and delete data in the database.

Flask, being a microframework, gives applications flexibility in their choice of data source and provides library support for interacting with different kinds of data sources. There are libraries to connect Flask to both SQL- and NoSQL-based databases.


Error and Exception Handling Using Try/Catch in PowerShell



One of the most important components for creating PowerShell scripts is error and exception handling.

I've personally made the mistake of writing scripts without proper exception handling and then struggling to figure out why they terminated. 😡

Error and exception handling is often a forgotten component of scripting because it's common to feel that the code should always execute linearly and in an implicit fashion.





This is due to the common practice of taking small scripts and using them as starting points for more complex scripts.

The more complex your scripts become, the higher the probability of failures and unexpected results.



In this post, you will learn the following:
  • Types of errors
  • Different ways to handle exceptions
  • Error and exception handling with parameters
    • Different actions
  • Error and exception handling with Try/Catch


PowerShell has two different types of errors: terminating and non-terminating.

Terminating errors will stop the script from executing further commands. 

Non-terminating errors will call the Write-Error cmdlet, print an error to the screen, and continue.
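
To see the difference, consider a minimal sketch (the path and message are just illustrative):

#Non-terminating: the cmdlet writes the error and the script keeps going
Get-ChildItem -Path 'C:\DoesNotExist'
Write-Host "Still running after a non-terminating error"

#Terminating: execution stops here unless the error is caught
throw "This is a terminating error"
Write-Host "This line is never reached"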


Error and Exception Handling 

PowerShell offers several different options to achieve error and exception handling.

The most popular method used to catch non-terminating errors is by passing the error and exception handling parameters while executing PowerShell cmdlets.

You can then inspect the specified error variable and execute other actions based on its contents.


The PowerShell parameters that handle errors and exceptions are -WarningAction and -ErrorAction.

When an issue occurs with your script, the PowerShell CLR will reference the -ErrorAction and -WarningAction arguments to determine what the next step for the script is.


There are five actions that are supported within PowerShell. 

  • The SilentlyContinue action will suppress the error and warning information, populate the error variables, and continue.
  • The Ignore action will suppress the warning and error message and not populate any specified variables.
  • The Continue action will write the warning and error information to the screen and attempt to continue with the script.
  • The Stop action will write the warning and error information and stop the execution of the script.
  • The Inquire action will prompt the end user whether they want to Halt, Suspend, Accept the error, or Accept all errors.


By default, PowerShell is set to Continue; however, you can set the $ErrorActionPreference and $WarningActionPreference global preference variables to different values for different default actions.
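
As a small sketch, you can change the default behavior for the whole session by assigning one of the actions above to these preference variables (the values shown are just examples):

#Make every error terminating for the rest of the session
$ErrorActionPreference = 'Stop'

#Suppress warnings by default
$WarningActionPreference = 'SilentlyContinue'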


We will see one example of cmdlet error handling.


Function TestExample($svcName) {
    Get-Service $svcName -ErrorAction SilentlyContinue -ErrorVariable err

    If ($err) {
        Write-Host "Error! Error Details: $err"
        return
    }

    Write-Host "Successfully Retrieved Service Information for $svcName."
}
TestExample "Windows Update"
Write-Host ""
TestExample "Does Not Exist"

####################################################
Status   Name               DisplayName                           
------   ----               -----------                           
Stopped  wuauserv           Windows Update                        
Successfully Retrieved Service Information for Windows Update.

Error! Error Details: Cannot find any service with service name 'Does Not Exist'.


If the $err variable has data in it (that is, it evaluates to true), the script writes "Error! Error Details: $err" to the console, followed by return, which exits the function. If the $err variable doesn't contain any error details, the function proceeds to write the success message to the console.


Handling errors with Try/Catch/Finally

One of the more popular error and exception handling techniques is leveraging Try/Catch methodology.

The Try/Catch block is used for handling terminating errors and has a very simple structure. You first use the Try { } section of code and then use Catch { } to catch any errors and perform actions based on the errors.



try
{
    $items = Get-Item -Path C:\Does\Not\Exist, C:\Windows, $env:APPDATA -ErrorAction Stop
}
catch [System.Management.Automation.ItemNotFoundException]
{
    # Specific catch block for the exception type
    # PSItem contains the error record, and TargetObject may contain the actual object raising the error
    Write-Host ('Could not find folder {0}' -f $PSItem.TargetObject)
}
finally
{
    # Regardless of whether an error occurred or not, the optional
    # finally block is always executed.
    Write-Host 'Always executed'
}

You can find out which type of exception occurred by examining its type, using $Error[0].Exception.GetType().FullName.
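
For example, after a failed call you can look up the type of the most recent error and copy the resulting type name into a specific Catch block (a minimal sketch; the path is just illustrative):

#Trigger a failure, then inspect the exception type of the most recent error
Get-Item -Path 'C:\Does\Not\Exist' -ErrorAction SilentlyContinue
$Error[0].Exception.GetType().FullName   # System.Management.Automation.ItemNotFoundException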



One of the best practice techniques for error and exception handling is to combine the use of the Try/Catch block and cmdlet parameters. This is because it lets PowerShell gracefully handle both terminating and non-terminating error scenarios.


For instance, if you execute a line of code that throws a warning message but doesn't generate a terminating error, you can catch the warning and perform actions based on that warning.



Try {
    Get-Process "Doesn't Exist" -ErrorAction SilentlyContinue -ErrorVariable err
}
Catch {
    Write-Host "Try/Catch Exception Details: $_"
}
if ($err) {
    Write-Host "Cmdlet Error Handling Error Details: $err"
}

#############################################
Cmdlet Error Handling Error Details: Cannot find a process with the name "Doesn't Exist". Verify the process name and call the cmdlet again. 



When you execute the script, you see that the Catch block doesn't catch the error message from the Get-Process cmdlet. This is because the error is a non-terminating error, and so it doesn't invoke the Catch block.


The cmdlet error handling, however, properly handles the error and places the error details in the $err variable.

Quick Summary: 
  • There are two types of errors in PowerShell: terminating and non-terminating.
  • Error records are written to the error stream, which is displayed in the console by default.
  • Error records are rich objects.
  • The $error variable stores the last 256 errors (by default).
  • You can specify a specific variable for errors by using the -ErrorVariable parameter.
  • $? stores a Boolean value indicating the execution status of the last command (see the short sketch after this list).
  • $ErrorActionPreference and the -ErrorAction parameter can be used to control the action taken if an error occurs.
  • Terminating errors and exceptions can be managed by the trap statement or the try/catch/finally statements (preferred).
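
Here is a short sketch of the last few points (the path is just illustrative):

#Capture errors in a dedicated variable instead of stopping the script
Get-Item -Path 'C:\Does\Not\Exist' -ErrorAction SilentlyContinue -ErrorVariable myErr

$?      # False - the last command did not succeed
$myErr  # the captured error record(s)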
Happy Coding!


Working with XML Files in PowerShell [Parsing]


In the last post, we worked with CSV files. The next type of file we're going to look at is Extensible Markup Language (XML). XML files are used for various purposes, for example, storing properties that can be used for configuration and general data storage.





XML is one of the basic file formats and consists of opening and closing tags.



<config>
 <pctype>
        <type name="Desktop PC" value="D"></type>
        <type name="Notebook" value="N"></type>
        <type name="Tablet PC" value="T"></type>
    </pctype>
    <logdir>E:\Logs\</logdir>
    <logfile>SCCMLogs.txt</logfile>
</config>

Values can be added both in a nested fashion and inline as attributes. With the nested approach, indentation also gives you a visual indication of which objects sit on the same level.


We will start with an example of storing and loading configuration data.




PS C:\WINDOWS\system32> $XMlcontent= @'
<config>
 <pctype>
        <type name="Desktop PC" value="D"></type>
        <type name="Notebook" value="N"></type>
        <type name="Tablet PC" value="T"></type>
    </pctype>
    <logdir>E:\Logs\</logdir>
    <logfile>SCCMLogs.txt</logfile>
</config>
'@


Now let's save the file and then read the content of the XML file back as an XML object. To do this, we perform the following steps.



PS C:\WINDOWS\system32> #Path where the config file is being saved
$configPath = 'c:\users\config.xml'

#Saving config file
$XMlcontent | Set-Content $configPath

#Loading xml as config
[XML] $configXml = Get-Content -Path $configPath -ErrorAction 'Stop'



Now let's check what we have in that XML object, i.e., $configXml.



PS C:\WINDOWS\system32> $configXml

xml                            config
---                            ------
version="1.0" standalone="yes" config




We saw how easily the XML file can be loaded with PowerShell and its configuration data used. The implementation is very straightforward and involves casting the loaded content to an XML object with [XML]. This allows us to work directly with IntelliSense and find our configuration properties easily.


Now that we have the object, we can try various tasks like parsing the XML, reading the top nodes, and filtering or searching tags/values. Let's understand this with an example.

Let's try to fetch all the PCType values.



PS C:\WINDOWS\system32> $configXml.config.PCType.Type

Name       Value
----       -----
Desktop PC D    
Notebook   N    
Tablet PC  T    


I also learned about Out-GridView, which outputs the results in a separate window in tabular format.



PS C:\WINDOWS\system32> $configXml.config.PCType.Type | Out-GridView




Filtering for only the Notebook type can be done by applying Where-Object. The example below shows how to achieve this.




PS C:\WINDOWS\system32> $NotebookCode =  ($configXml.Config.PCType.Type | Where-Object {$_.Name -like 'Notebook'}).Value

PS C:\WINDOWS\system32> $NotebookCode
N



We can also run a condition against the object to check whether a value exists:



PS C:\WINDOWS\system32> if ($configXml.Config.Logdir -ne $null)
{
    $LogDir = $configXml.Config.Logdir
    $LogFile = $configXml.Config.LogFile
}

PS C:\WINDOWS\system32> $LogDir
E:\Logs\

PS C:\WINDOWS\system32> $LogFile
SCCMLogs.txt



It is possible to use filtering and searching to find configuration properties. Especially when you are working with very complex configuration files, this technique might come in handy.

In the next example, we will take a more programmatic approach to finding values in an XML file by using XPath filters.

XPath filters allow us to find and retrieve objects in bigger, more complex XML files.



[xml]$xml = Get-Content 'C:\path\to\your.xml'
$xml.SelectNodes('//person') | Select-Object Name




Just as regular expressions are the standard way to interact with plain text, XPath is the standard way to interact with XML. Because of that, XPath is something you are likely to run across in your travels. Several cmdlets support XPath queries: Select-Xml, Get-WinEvent, and more.
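
For instance, Get-WinEvent accepts an XPath query through its -FilterXPath parameter. A minimal sketch (the log name and level are just illustrative):

#Query the System log for error-level events (Level=2) using an XPath filter
Get-WinEvent -LogName System -FilterXPath '*[System[Level=2]]' -MaxEvents 5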

Here we have an example of working with XPath in combination with Select-Xml:

PS C:\WINDOWS\system32> $Path = 'C:\temp\config.xml'

PS C:\WINDOWS\system32> $Xpath = "/config/pctype"

PS C:\WINDOWS\system32> Select-Xml -Path $Path -XPath $Xpath | Select-Object -ExcludeProperty Node

Node   Path               Pattern       
----   ----               -------       
pctype C:\temp\config.xml /config/pctype



To make use of XPath filters, you can work with the Select-Xml cmdlet and pass the XPath filter as a string. Here we used a simple filter, "/config/pctype", to retrieve the objects at a specific location in the structure; a filter that starts with a double slash, such as "//type", retrieves all matching elements recursively, regardless of where they appear in the document.
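
A minimal sketch of such a recursive filter, reusing the $Path variable from above (the name shown is the name attribute of each type element):

#Retrieve every <type> element anywhere in the document and list its name attribute
Select-Xml -Path $Path -XPath '//type' | ForEach-Object { $_.Node.name }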


Conclusion:
XML has long been widely used as an option for storing application data. It has some limitations, though: it begins to struggle as your XML files increase in size (>1 MB), or if many people try to work with them simultaneously. But it remains powerful when we are working with configuration data or dynamic property handling.

Happy Coding 👍

Managing CSV Files Using Import-Csv/Export-Csv in PowerShell


In the PowerShell series, we are looking into working with files in PowerShell. The first type of file we are covering is the CSV (comma-separated values) file. We are going to look into two important cmdlets, Import-Csv and Export-Csv, which are widely used when working with CSV files.






We start with the CSV file extension, as this is the most basic one. We will make use of the previous example, where we stored the currently running processes to file:


#Defining file for export
$exportedFile = 'C:\temp\exportedProcesses.csv'

#Exporting as CSV - basic
Get-Process | Export-Csv $exportedFile

#Opening the file
psedit $exportedFile

By default, Export-Csv will write a comma-delimited file using ASCII encoding and will completely overwrite any file using the same name.

Export-Csv may be used to add lines to an existing file using the Append parameter. When the Append parameter is used, the input object must have each of the fields listed in the CSV header or an error will be thrown unless the Force parameter is used.
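
A small sketch of appending to the export from above (the process name is just illustrative):

#Append additional rows to the existing CSV file
Get-Process -Name explorer | Export-Csv $exportedFile -Append

#Objects that are missing some of the header's columns only append cleanly with -Force
Get-Process -Name explorer | Select-Object Name, Id | Export-Csv $exportedFile -Append -Force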

After running the initial export example, you will have the opened CSV file in front of you, which contains all the processes, with each value separated by commas; that is what CSV actually stands for: comma-separated values. The benefit of working with CSV files is that, on import, you get table-like custom objects returned, which can easily be filtered. This file type makes sense especially for simple data objects.

By default, Export-Csv also writes a type-information header (a line starting with #TYPE) as the first line of the file. Export-Csv can be instructed to exclude this header using the NoTypeInformation parameter:


Get-Process | Export-Csv processes.csv -NoTypeInformation 

Importing is very straightforward.


#Importing CSV file
$data = Import-Csv $exportedFile

#Showing content
$data | Out-GridView


Comma-Separated Values (CSV) files are plain text. Applications such as Microsoft Excel can work with CSV files without changing the file format, although the advanced features Excel has cannot be saved to a CSV file.

By default, Import-Csv expects the input to have a header row, to be comma-delimited, and to use ASCII file encoding. If any of these items are different, the command parameters may be used. For example, a tab may be set as the delimiter.


Import-Csv TabDelimitedFile.tsv -Delimiter "`t"

A backtick followed by t (`t), inside a double-quoted string, is used to represent the tab character in PowerShell.

Data imported using Import-Csv always comes back as strings. If the file contains numeric values, each of the numbers will also be treated as a string.
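
A small sketch of this behavior, using the Id column from the process export and the $data variable imported above (the explicit cast is what turns the string back into a number):

#Every imported value comes back as a string; cast explicitly when you need a number
$data[0].Id.GetType().Name   # String
[int]$data[0].Id             # explicit conversion back to an integer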



#Showing its type
$data | Get-Member # TypeName: CSV:System.Diagnostics.Process
$data[0].GetType() # PSCustomObject
$data.GetType()    # System.Array

It's interesting to see here what type is being retrieved after you import the CSV file. The Get-Member cmdlet on the $data object itself shows that it is a CSV file, and the exported objects are of type System.Diagnostics.Process. But, after taking a dedicated look at the first object and at the type of the container, you will recognize that the imported object cannot be used as a process anymore. It has become a PSCustomObject. Nevertheless, it is still an improvement over exporting it as a plain string. You can easily import it and use it as a simple data store.


The next big benefit when working with CSV files is that you can make them editable with Microsoft Excel. To achieve this, you just need to change the delimiter from comma (,) to semicolon (;), as this is the default delimiter for Excel files. You can use the dedicated -Delimiter flag for this task.


#Exporting as CSV with specified delimiter ';'
Get-Process | Export-Csv C:\temp\exportedProcesses.csv -Delimiter ';'

#Importing the data
$data = Import-Csv C:\temp\exportedProcesses.csv -Delimiter ';'

#Showing the data
$data | Out-GridView

Be careful here though, as this is culture-specific behavior. To avoid problems with different cultures, you can use the -UseCulture flag.
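
A minimal sketch of the culture-aware variant, which uses the list separator of the current culture on both the export and the import side:

#Export and import using the current culture's list separator
Get-Process | Export-Csv C:\temp\exportedProcesses.csv -UseCulture
$data = Import-Csv C:\temp\exportedProcesses.csv -UseCulture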
Now, editing with Excel is possible. To demonstrate the power of PowerShell, we will now open the file with Excel via PowerShell, using Excel's COM object.

#Create ComObject for Excel
$excel = New-Object -ComObject Excel.Application

#Make it visible
$excel.Visible = $true

#Open the CSV file
$excel.Workbooks.Open($exportedFile)

You can try opening a CSV file exported with the comma delimiter and one exported with the semicolon delimiter to see the difference between the two approaches for yourself.

Conclusion:

In this section we covered working with CSV files. We looked at the Export-Csv cmdlet for exporting data as CSV and Import-Csv for reading it back. These cmdlets are mostly used for reporting, since the output can be viewed directly in Excel. I have personally used Export-Csv and Import-Csv while working with the Azure platform, which we will cover in detail another day.

How to Read/Write Files in PowerShell Using Set-Content & Get-Content

In the PowerShell script-writing series, we are working through some of the areas that are helpful for writing PowerShell scripts. In this post we continue with some of the important topics: reading files, writing files, and working with folders and subfolders.
Another area where you should become very confident is working with files (read & write), as you will need to work with them very frequently.

First, we will take a look at the basics of working with files by retrieving and writing files and the content of the files. This can be achieved with the Get-Content and Set-Content/Out-File cmdlets.

First of all, we will take a dedicated look at how you can export content to a file:

#Storing working location
$exportedProcessesPath = 'C:\temp\test.txt'

#Write processes to file
Get-Process | Set-Content -Path $exportedProcessesPath 

#Open file to verify
psedit $exportedProcessesPath

#retrieving processes and exporting them to file
Get-Process | Out-File $exportedProcessesPath 

#Open file to verify
psedit $exportedProcessesPath #or use notepad to open file

#retrieving processes and exporting them to file with Out-String
Get-Process | Out-String | Set-Content $exportedProcessesPath -PassThru

#Open file to verify
psedit $exportedProcessesPath #or use notepad to open file

There is a small difference between exporting content with the two aforementioned cmdlets. Set-Content will call the ToString() method of each object, whereas Out-File will first run the objects through PowerShell's formatting system (as Out-String does) and then write the result to the file.
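
A small sketch that makes the difference visible (the file names are just illustrative):

#Set-Content writes each object's ToString() result - one short line per process
Get-Process | Select-Object -First 3 | Set-Content C:\temp\tostring.txt

#Out-File writes what you would see in the console, i.e. the formatted table
Get-Process | Select-Object -First 3 | Out-File C:\temp\formatted.txt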

You will get a similar result whether you use Out-File or Set-Content in combination with Out-String, as the last command above shows.

Sometimes, it may also be necessary to export the content with a specified encoding. There is an additional flag available to accomplish this task, as shown in the following example:


#retrieving processes and exporting them to file with a specified encoding
Get-Process | Out-String | Set-Content $exportedProcessesPath -Encoding UTF8
Get-Process | Out-String | Set-Content $exportedProcessesPath -Encoding Unicode

Reading file content in PowerShell

Retrieving the content works very similarly, using the Get-Content cmdlet. One downside of this cmdlet is that it loads the complete file into memory. Depending on the file size, this may take a very long time and even become unstable. Here is an easy example that loads the content into a variable:
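
#Loading the complete file content into a variable
$data = Get-Content -Path $exportedProcessesPath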

Because of this issue, it may become necessary to only retrieve a dedicated number of lines. There are two flags available for this, as follows:

#The last five lines
Get-Content -Path $exportedProcessesPath -Tail 5

#The first five lines
Get-Content -Path $exportedProcessesPath -TotalCount 5


Improving the performance of Get-Content

In addition, you can also specify how many lines of content are sent through the pipeline at a time. The default value for the ReadCount flag is 1, and a value of 0 sends all of the content through at once. This parameter directly affects the total time for the operation, and can decrease the time significantly for larger files:


#Get-Content with ReadCount for a performance improvement
$data = (Get-Content -Path $exportedProcessesPath -ReadCount 50)

#Retrieving data as one large string
$data = Get-Content -Path $exportedProcessesPath -Raw


Working with Files, Folders, and Subfolders

The next step when working with files and folders is searching for specific ones. This can be easily achieved with the Get-ChildItem command for the specific PSDrive:


#Simple Subfolders
Get-ChildItem -Path 'C:\temp' -Directory

#Recurse
Get-ChildItem -Path 'C:\Windows' -Directory -Recurse 


#Simple Subfiles
Get-ChildItem -Path 'C:\temp' -File


#Recurse
Get-ChildItem -Path 'C:\Windows' -File -Recurse 

As you can see, you can easily work with the -Directory and -File flags to define the outcome. But you will normally not use such simple queries, as you want to filter the result in a dedicated way.


The next, more complex example shows a recursive search for *.txt files. We are taking three different approaches to search for those file types and will compare their runtimes.


At the end, we will also check that all of the approaches retrieved the same number of items.

The Slow Approach!


#Define a location where txt files are included
$Dir = 'C:\temp\'

#Filtering with .Where()
$timeWhere = (Measure-Command {(Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue).Where({$_.Extension -like '*txt*'})}).TotalSeconds

$countWhere = $((Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue).Where({$_.Extension -like '*txt*'})).Count

#Filtering with Where-Object
$timeWhereObject = (Measure-Command {(Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue) | Where-Object {$_.Extension -like '*txt*'}}).TotalSeconds

$countWhereObject = $((Get-ChildItem $Dir -Recurse -Force -ErrorAction SilentlyContinue) | Where-Object {$_.Extension -like '*txt*'}).Count


The first two approaches use Get-ChildItem with filtering afterwards, which is always the slowest approach.



#Filtering with Include
$timeInclude = (Measure-Command {Get-ChildItem -Path "$($Dir)*" -Include *.txt* -Recurse}).TotalSeconds

$countInclude = $(Get-ChildItem -Path "$($Dir)*" -Include *.txt* -Recurse).Count

The third approach uses filtering within the Get-ChildItem cmdlet, using the -Include flag. This is obviously much faster than the first two approaches.


#Show all results
Write-Host @"
Filtering with .Where(): $timeWhere
Filtering with Where-Object: $timeWhereObject
Filtering with Include: $timeInclude
All methods retrieved the same number of items? $(($countWhere -eq $countWhereObject) -and ($countWhereObject -eq $countInclude))
"@




You will also need to create new files and folders and combine paths very frequently, which is shown in the following snippet. The subdirectories of a folder are being gathered, and one archive folder will be created underneath each one:

#user folder
$UserFolders = Get-ChildItem 'c:\users\' -Directory

#Creating archives in each subfolder
foreach ($userFolder in $UserFolders)
{
    New-Item -Path (Join-Path $userFolder.FullName ('{0}_Archive' -f $userFolder.BaseName)) -ItemType Directory -WhatIf
}


Keep in mind that, due to the PSDrives, you can simply work with the basic cmdlets such as New-Item. We made use of the -WhatIf flag to just take a look at what would have been executed. If you're not sure that your construct is working as desired, just add the flag and execute it once to see its outcome.
A best practice to combine paths is to always use Join-Path to avoid problems on different OSes or with different PSDrives. Typical errors are that you forget to add the delimiter character or you add it twice. This approach will avoid any problems and always add one delimiter.
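
A quick sketch of that behavior (the paths are just illustrative):

#Join-Path inserts exactly one separator, whether or not the parent path ends with one
Join-Path 'C:\temp' 'Logs'    # C:\temp\Logs
Join-Path 'C:\temp\' 'Logs'   # C:\temp\Logs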

The next typical use case you will need to know is how to retrieve file and folder sizes.


The following example retrieves the size of a single folder, optionally displaying the size for each subfolder as well.


It is written as a function so that it can be dynamically extended. This might be good practice for you, in order to understand and make use of the contents of the previous posts. You can try to extend this function with additional properties and by adding functionality to it.


<#
.SYNOPSIS
Retrieves folder size.
.DESCRIPTION
Retrieves folder size of a dedicated path or all subfolders of the dedicated path.
.EXAMPLE
Get-FolderSize -Path c:\temp\ -ShowSubFolders | Format-List
.INPUTS
Path
.OUTPUTS
Path and Sizes
.NOTES
folder size example
#>
function Get-FolderSize {
    Param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        $Path,
        [ValidateSet("KB","MB","GB")]
        $Units = "MB",
        [Switch] $ShowSubFolders = $false
    )
    if ((Test-Path $Path) -and (Get-Item $Path).PSIsContainer)
    {
        if ($ShowSubFolders)
        {
            $subFolders = Get-ChildItem $Path -Directory
            foreach ($subFolder in $subFolders)
            {
                $Measure = Get-ChildItem $subFolder.FullName -Recurse -Force -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum
                $Sum = $Measure.Sum / "1$Units"
                [PSCustomObject]@{
                    "Path" = $subFolder
                    "Size($Units)" = [Math]::Round($Sum,2)
                }
            }
        }
        else
        {
            $Measure = Get-ChildItem $Path -Recurse -Force -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum
            $Sum = $Measure.Sum / "1$Units"
            [PSCustomObject]@{
                "Path" = $Path
                "Size($Units)" = [Math]::Round($Sum,2)
            }
        }
    }
}
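
Based on the parameters defined above, the function can then be called like this (the path is just illustrative):

#Size of the folder itself, in MB (the default unit)
Get-FolderSize -Path C:\temp\

#Size of every subfolder, in GB
Get-FolderSize -Path C:\temp\ -Units GB -ShowSubFolders | Format-List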


Next, we will dive into specific file types, as they hold some benefits in storing, retrieving, and writing information to file.