Powerful Zsh

First you have Zsh; next, install Oh My Zsh https://ohmyz.sh/:

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

Add Powerlevel10k https://github.com/romkatv/powerlevel10k and configure it:

git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k

p10k configure

Set ZSH_THEME to powerlevel10k/powerlevel10k in .zshrc:

ZSH_THEME="powerlevel10k/powerlevel10k"

Add zsh-autosuggestions https://github.com/zsh-users/zsh-autosuggestions and enable it in .zshrc:

git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions

...
plugins=(
  zsh-autosuggestions
)
...

Install Fig https://fig.io/, IDE-style autocomplete for the terminal, and configure it in .zshrc:

...
# Fig pre block. Keep at the top of this file.
[[ -f "$HOME/.fig/shell/zshrc.pre.zsh" ]] && . "$HOME/.fig/shell/zshrc.pre.zsh"
...
# Fig post block. Keep at the bottom of this file.
[[ -f "$HOME/.fig/shell/zshrc.post.zsh" ]] && . "$HOME/.fig/shell/zshrc.post.zsh"
...

Convert JSON to CSV by using jq

Step by step, convert JSON data exported from an AWS DynamoDB table into CSV, using jq.

First, export all the data from the AWS DynamoDB table:

λ aws --profile production dynamodb scan --table-name tiles > tiles.json

The exported JSON data looks like:

{
  "Items": [
    {
      "last_modified_date": {
        "S": "2021-12-09T01:15:25.335516"
      },
      "valid_from": {
        "S": "2021-12-09T01:00"
      },
      "created_date": {
        "S": "2021-12-09T01:15:25.334965"
      },
      "status": {
        "S": "PUBLISHED"
      },
      "valid_to": {
        "S": "2022-01-31T23:00"
      },
      "id": {
        "S": "b2c60f43-a81c-4363-a68a-dfe7682182d7"
      },
      "description": {
        "S": "Hit the road Jack!"
      },
      "title": {
        "S": "Novated Lease"
      }
    },
    ...
  ],
  "Count": 223,
  "ScannedCount": 223,
  "ConsumedCapacity": null
}

Extract / transform JSON data:

λ cat tiles.json | jq '[.Items[] | { id: .id.S, title: .title.S, description: .description.S, status: .status.S, valid_from: .valid_from.S, valid_to: .valid_to.S }]' > tiles-extracted.json
[
  {
    "id": "b2c60f43-a81c-4363-a68a-dfe7682182d7",
    "title": "Novated Lease",
    "description": "Hit the road Jack!",
    "status": "PUBLISHED",
    "valid_from": "2021-12-09T01:00",
    "valid_to": "2022-01-31T23:00"
  },
  ...
]

Convert JSON data into CSV:

λ cat tiles-extracted.json | jq -r '(.[0] | keys_unsorted) as $keys | $keys, map([.[ $keys[] ]])[] | @csv' > tiles.csv

Customise VS Code settings and keybindings with Geddski macros

In IntelliJ IDEA, when you comment a line the cursor automatically moves to the next line, which makes it very easy to comment out several lines in a row. In VS Code, however, the default behaviour is that the cursor stays on the same line.

To replicate the IntelliJ behaviour:

  • Install the macros extension by geddski in VS Code.

  • Edit settings.json and add:

"macros": {
  "commentDown": [
    "editor.action.commentLine",
    "cursorDown"
  ]
},
  • Edit keybindings.json and add:

[
  {
    "key": "cmd+/",
    "command": "macros.commentDown",
    "when": "editorTextFocus && !editorReadonly"
  }
]

Export and Import AWS DynamoDB data

A simple, straightforward way to export and import an AWS DynamoDB table's data with the AWS CLI and a few scripts.

First, export all the data from the AWS DynamoDB table:

λ aws --profile production dynamodb scan --table-name tile-event > tile-event-export.json

Convert the list of items/records (DynamoDB JSON) into individual PutRequest JSON with jq:

λ cat tile-event-export.json | jq '{"Items": [.Items[] | {PutRequest: {Item: .}}]}' > tile-event-import.json

Transform the data if necessary:

λ sed 's/tile-images-prod/tile-images-pdev/g' tile-event-import.json > tile-event-import-transformed.json

Split all requests into files of 25 requests each, with jq and awk. (Note: there are some restrictions on the AWS DynamoDB batch-write-item request - the BatchWriteItem operation can contain up to 25 individual PutItem and DeleteItem requests and can write up to 16 MB of data; the maximum size of an individual item is 400 KB.)

λ cat tile-event-processed.awk
#!/usr/bin/awk -f

# Every 25th line starts a new batch: open a new file and print the JSON preamble.
NR%25==1 {
  x="tile-event-import-processed-"++i".json";
  print "{" > x
  print "  \"tile-event\": [" > x
}
# Write the item itself.
{
  printf "  %s", $0 > x;
}
# Items within a batch are separated by commas.
NR%25!=0 {
  print "," > x
}
# At the end of a full batch of 25, close the JSON document.
# Caveat: if the item count is not a multiple of 25, the last file keeps a
# trailing comma and is left unclosed; remove the comma and append "  ]" and "}".
NR%25==0 {
  print "" > x
  print "  ]" > x
  print "}" > x
}

λ jq -c '.Items[]' tile-event-import-transformed.json | ./tile-event-processed.awk

Import all 22 processed JSON files into the DynamoDB table:

λ for f in tile-event-import-processed-{1..22}.json; do \
    echo $f; \
    aws --profile development dynamodb batch-write-item --request-items file://$f; \
  done

Get and read logs from AWS CloudWatch with saw

For all the people who painfully read logs in the AWS CloudWatch console, saw is your friend.

Get the CloudWatch log groups starting with paradise-api:

λ saw groups --profile ap-prod --prefix paradise-api
paradise-api-CloudFormationLogs-mwwmzgYOtbcB

Get the last 2 hours of logs for paradise-api from CloudWatch, with saw:

λ saw get --profile ap-prod --start -2h paradise-api-CloudFormationLogs-mwwmzgYOtbcB --prefix docker | jq .log | sed 's/\\\n"$//; s/^"//'

Read environment variables of a process in Linux

To read the content of any /proc/PID/environ file in a more readable format, first note how the file is laid out (from proc(5)):

/proc/[pid]/environ
    This file contains the environment for the process. The entries
    are separated by null bytes ('\0'), and there may be a null byte
    at the end.

A simple way is to apply xargs -0 -L1 -a on it:

  • -0 - input items are terminated by a null byte instead of whitespace
  • -L1 - use at most one line of input per command invocation
  • -a file - read input from file instead of standard input
# ps -aef
10101 3629 3589 0 Apr27 ? 00:00:00 /bin/bash bin/start
10101 3670 3629 0 Apr27 ? 00:00:00 /bin/bash bin/start-tomcat
10101 3671 3670 0 Apr27 ? 00:07:36 /usr/lib/jvm/java-11-amazon-corretto.x86_64/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/

# cat /proc/3629/environ
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=27c44e8a5c7cJAVA_HOME=/usr/lib/jvm/java-11-amazon-corretto.x86_64HOME=/usr/local/tomcat

# xargs -0 -L1 -a /proc/3629/environ
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=27c44e8a5c7c
JAVA_HOME=/usr/lib/jvm/java-11-amazon-corretto.x86_64
HOME=/usr/local/tomcat
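
The same null-byte splitting can be done in Python when xargs is unavailable (a minimal sketch; `parse_environ` is a hypothetical helper and the sample bytes below are illustrative):

```python
def parse_environ(raw: bytes) -> dict:
    """Split a /proc/<pid>/environ buffer on null bytes into a name -> value dict."""
    # Drop the empty trailing entry caused by the final null byte, then split
    # each NAME=value entry on the first '=' only.
    entries = (e for e in raw.decode().split("\0") if e)
    return dict(e.split("=", 1) for e in entries)

raw = b"PATH=/usr/local/sbin:/usr/local/bin\0HOSTNAME=27c44e8a5c7c\0HOME=/usr/local/tomcat\0"
env = parse_environ(raw)
print(env)
```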

AWS KMS decrypt for base64 encoded input

With AWS CLI version 2:

λ aws --version
aws-cli/2.1.17 Python/3.7.4 Darwin/20.3.0 exe/x86_64 prompt/off

Encrypt with AWS KMS key:

λ aws kms encrypt --profile personal \
    --key-id e2695b79-cbe0-4c16-aa5e-b7dbf52df1f9 \
    --plaintext "string-to-encrypt" \
    --output text \
    --query CiphertextBlob \
    --cli-binary-format raw-in-base64-out
AQICAHjbJrIPgME ... lILuBSUdA==

Decrypt with AWS KMS key:

λ echo "AQICAHjbJrIPgME ... lILuBSUdA==" | base64 -D | \
    aws kms decrypt --profile personal \
    --ciphertext-blob fileb:///dev/stdin \
    --output text \
    --query Plaintext | base64 -D
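
The two base64 -D calls are needed because the CLI base64-encodes binary blobs on both sides: the encrypt output (CiphertextBlob) is base64 text, and the decrypt output (Plaintext) is base64 text again. A sketch of the round trip using a stand-in XOR cipher (not real KMS; the helper names are illustrative):

```python
import base64

KEY = 0x5A  # stand-in XOR key; this only exercises the encodings, NOT real KMS

def fake_kms_encrypt(plaintext: bytes) -> str:
    """Return the CiphertextBlob the way the CLI prints it: as base64 text."""
    cipher = bytes(b ^ KEY for b in plaintext)
    return base64.b64encode(cipher).decode()

def fake_kms_decrypt(cipher: bytes) -> str:
    """Return the Plaintext the way the CLI prints it: as base64 text again."""
    plaintext = bytes(b ^ KEY for b in cipher)
    return base64.b64encode(plaintext).decode()

blob = fake_kms_encrypt(b"string-to-encrypt")
raw_cipher = base64.b64decode(blob)                          # first `base64 -D`
recovered = base64.b64decode(fake_kms_decrypt(raw_cipher))   # second `base64 -D`
print(recovered)
```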

A Modern Architecture Application

RAD (Rapid Application Development) of a serverless application "Notification Service" on modern technologies, e.g. AWS CDK & SAM, AWS Step Functions, TypeScript, VS Code, OpenAPI top-down design and test-driven development, in order to rapidly build a prototype or a POC, and to verify and test some technologies and approaches.

Request Handler => Step Functions (orchestration of Lambda functions; represents a single, centralized, executable business process; outsources low-level operations like retries and exception handling. Another choice is SNS) => Service Providers

Having experienced Terraform, Serverless, AWS SAM … this time it is based on the code-over-configuration principle, and what you get is flexibility, predictability and more control. You focus on code, and you tell the tools directly what steps to complete. At the end of the day, it is a simple matter of separation of concerns and the single responsibility principle.

• VS Code for API Spec editing

• Postman API, environment and mock server for the QA team, then switch to the real service in the DEV/TEST environment

λ npm run openapi

• openapi-generator generates model classes; typescript-json-validator generates the JSON Schema and validator:

λ openapi-generator generate -g typescript-node -i Notification\ API\ openapi.json -o Notification\ API\ generated
λ npx typescript-json-validator notificationRequest.ts NotificationRequest

• Onboard onto Kong / API Manager, https://konghq.com/kong/

• CDK is based on CloudFormation, but as an abstraction layer on top of it. It can generate the CloudFormation template file template.yaml:

λ cdk synth --no-staging > template.yaml

• Demo of locally running and debugging a Lambda, with a background TSC watch process:

λ npm run watch

λ sam local invoke RequestNotification9F9F3C31 -e samples/api-gateway-notification-event.json
λ sam local invoke RequestNotification9F9F3C31 -e samples/api-gateway-notification-event.json -d 5858

Data validation to make data integrity unbreachable takes a lot of time.

The ajv framework and a performance benchmark: https://github.com/ebdrup/json-schema-benchmark

• Code linting with eslint and prettier, with automatic correction

• Code commit rule enforcement

• Change code and deploy the AWS stack with CDK:

λ cdk deploy --require-approval never --profile dev-cicd

• Behavior-driven test framework Jest, https://github.com/facebook/jest, 2x-3x faster than Karma, with code coverage and easy mocking:

λ npm t

• Automatically generate the application changelog and release notes:

λ npm run release:minor

• Automatically generate application documentation:

λ npm run docs

• AWS resources created by CDK

• Not a monorepo app, in which multiple projects all live under one giant repo

• ONE AWS Lambda Layer to put all dependent NPM libs and shared code into; it keeps the size of the Lambda functions down and improves readability

• AWS EventBridge to trigger and send events to the Request Handler, for scheduled tasks

• Health check, with a service monitoring dashboard; verifies dependencies at the endpoints and keeps the Lambdas warm:

λ curl https://c81234xdae8w1a9.execute-api.ap-southeast-2.amazonaws.com/health

Cloud computing and serverless architecture put developers in the fast lane of application development. Right now, there is so much low-hanging fruit to pick.

As developers, we should not always think about our comfort zone; we need to think about the people who take over our work, and about the BAU team that supports the application. The codebase is not about you, but about the value your code brings to others and to the organization you work for.