AWS KMS decrypt for base64 encoded input

With AWS CLI version 2:

𝜆 aws --version
aws-cli/2.1.17 Python/3.7.4 Darwin/20.3.0 exe/x86_64 prompt/off

Encrypt with AWS KMS key:

1
2
3
4
5
6
7
𝜆 aws kms encrypt --profile personal \
--key-id e2695b79-cbe0-4c16-aa5e-b7dbf52df1f9 \
--plaintext "string-to-encrypt" \
--output text \
--query CiphertextBlob \
--cli-binary-format raw-in-base64-out
AQICAHjbJrIPgME ... lILuBSUdA==

Decrypt with AWS KMS key:

1
2
3
4
5
𝜆 echo "AQICAHjbJrIPgME ... lILuBSUdA==" | base64 -D | \
aws kms decrypt --profile personal \
--ciphertext-blob fileb:///dev/stdin \
--output text \
--query Plaintext | base64 -D
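The same round trip can also be done programmatically. Below is a minimal TypeScript sketch using the AWS SDK for JavaScript v3 KMS client; the key id is the example key from the CLI commands above, region is assumed, and error handling is omitted:

// Sketch: encrypt and decrypt a string with AWS KMS via the AWS SDK v3.
import { KMSClient, EncryptCommand, DecryptCommand } from "@aws-sdk/client-kms";

const kms = new KMSClient({ region: "ap-southeast-2" });
const keyId = "e2695b79-cbe0-4c16-aa5e-b7dbf52df1f9";

async function roundTrip(plaintext: string): Promise<string> {
  // Encrypt: the SDK takes the plaintext as bytes and returns the ciphertext as bytes.
  const { CiphertextBlob } = await kms.send(
    new EncryptCommand({ KeyId: keyId, Plaintext: Buffer.from(plaintext) })
  );

  // Decrypt: the key id is embedded in the ciphertext, so it is not passed again.
  const { Plaintext } = await kms.send(new DecryptCommand({ CiphertextBlob }));
  return Buffer.from(Plaintext as Uint8Array).toString("utf8");
}

roundTrip("string-to-encrypt").then(console.log);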


A Modern Architecture Application

RAD (Rapid Application Development) of a serverless “Notification Service” application on modern technologies, e.g. AWS CDK & SAM, AWS Step Functions, TypeScript, VS Code, OpenAPI top-down design and Test Driven Development, in order to rapidly build a prototype or POC and to verify and test some technologies and approaches.

Request Handler => Step Functions => Service Providers. Step Functions orchestrates the Lambda functions, represents a single, centralised, executable business process, and outsources low-level operations such as retries and exception catching and handling (an alternative choice here is SNS).
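As a rough illustration of that orchestration, here is a hedged CDK (TypeScript) sketch of a state machine that invokes a request-handler Lambda and then a service-provider Lambda, with retry and catch handled by Step Functions rather than inside the function code. The construct and function names are invented for the example:

// Sketch: Step Functions orchestrating Lambda functions, with retry / catch
// pushed down into the state machine. Names are illustrative only.
import { Duration } from "aws-cdk-lib";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export function buildNotificationStateMachine(
  scope: Construct,
  requestHandler: lambda.IFunction,
  serviceProvider: lambda.IFunction
): sfn.StateMachine {
  const handleRequest = new tasks.LambdaInvoke(scope, "HandleRequest", {
    lambdaFunction: requestHandler,
  });

  const callProvider = new tasks.LambdaInvoke(scope, "CallServiceProvider", {
    lambdaFunction: serviceProvider,
  });
  // Low-level retry / exception handling lives in the state machine, not in the Lambda code.
  callProvider.addRetry({ maxAttempts: 3, interval: Duration.seconds(2), backoffRate: 2 });
  callProvider.addCatch(new sfn.Fail(scope, "NotificationFailed"));

  return new sfn.StateMachine(scope, "NotificationProcess", {
    definition: handleRequest.next(callProvider),
  });
}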

Having experienced Terraform, Serverless Framework, AWS SAM … this time the project is based on the code-over-configuration principle: what you get is flexibility, predictability and more control. You focus on code, and you tell the tools directly what steps they have to complete. At the end of the day, it is a simple matter of the separation of concerns and single responsibility principles.

VS Code for API Spec editing

Postman API, Environment and Mock Server for the QA team, then switch to the real service in the DEV/TEST environment

𝜆 npm run openapi

openapi-generator generates model classes; typescript-json-validator generates JSON Schema and validator

𝜆 openapi-generator generate -g typescript-node -i Notification\ API\ openapi.json -o Notification\ API\ generated
𝜆 npx typescript-json-validator notificationRequest.ts NotificationRequest
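To show how the generated artefacts might be consumed, here is a hedged TypeScript sketch. It assumes the openapi-generator model NotificationRequest and a validator file emitted by typescript-json-validator that default-exports a function throwing on invalid input; the exact file names depend on the generator options:

// Sketch: validating an incoming payload against the generated NotificationRequest model.
// Assumes typescript-json-validator emitted notificationRequest.validator.ts alongside the model.
import validateNotificationRequest from "./notificationRequest.validator";
import { NotificationRequest } from "./notificationRequest";

export function parseNotificationRequest(body: string): NotificationRequest {
  const payload: unknown = JSON.parse(body);
  // The generated validator throws a descriptive error when the payload
  // does not match the JSON Schema derived from the TypeScript type.
  return validateNotificationRequest(payload);
}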

• Onboard onto Kong / API Manager, https://konghq.com/kong/

CDK is based on CloudFormation, but is an abstraction layer on top of it. It can generate the CloudFormation template file template.yaml:

𝜆 cdk synth --no-staging > template.yaml
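For context, a minimal CDK app (TypeScript) that the command above could synthesise might look like the following sketch; the stack name and asset path are invented:

// Sketch: a minimal CDK app whose `cdk synth` output is the template.yaml above.
// Stack and handler names are illustrative.
import { App, Stack, StackProps } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

class NotificationServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // CDK derives logical ids such as RequestNotification9F9F3C31 from the construct id.
    new lambda.Function(this, "RequestNotification", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "requestNotification.handler",
      code: lambda.Code.fromAsset("dist"),
    });
  }
}

const app = new App();
new NotificationServiceStack(app, "NotificationService");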

• Demo of running and debugging the Lambda locally, with a background TSC watch process

𝜆 npm run watch

𝜆 sam local invoke RequestNotification9F9F3C31 -e samples/api-gateway-notification-event.json
𝜆 sam local invoke RequestNotification9F9F3C31 -e samples/api-gateway-notification-event.json -d 5858

Data validation, to make data integrity unbreachable, takes a lot of time.

ajv framework and performance benchmark, https://github.com/ebdrup/json-schema-benchmark
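As a small, hedged example of what that validation layer looks like with ajv (schema and payload invented for illustration):

// Sketch: compiling a JSON Schema once with ajv and reusing the validator per request.
import Ajv from "ajv";

const ajv = new Ajv({ allErrors: true });

const notificationRequestSchema = {
  type: "object",
  properties: {
    channel: { type: "string", enum: ["email", "sms"] },
    recipient: { type: "string" },
    message: { type: "string" },
  },
  required: ["channel", "recipient", "message"],
  additionalProperties: false,
};

const validate = ajv.compile(notificationRequestSchema);

const payload = { channel: "email", recipient: "user@example.com", message: "hello" };
if (!validate(payload)) {
  // validate.errors lists every violation because allErrors is enabled.
  console.error(validate.errors);
}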

• Code linting with ESLint and Prettier, with automatic correction

• Code commit rule enforcement

• Change code and deploy the AWS stack with CDK

𝜆 cdk deploy --require-approval never --profile dev-cicd

• Behaviour-driven test framework Jest, https://github.com/facebook/jest, 2x–3x faster than Karma, with code coverage and easy mocking

𝜆 npm t
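A hedged sketch of what such a behaviour-driven Jest test might look like, with an invented sendNotification module and a mocked provider:

// Sketch: behaviour-driven Jest test with a mocked downstream provider.
// The sendNotification module and emailProvider are illustrative names.
import { sendNotification } from "../src/sendNotification";
import * as emailProvider from "../src/providers/email";

jest.mock("../src/providers/email");

describe("sendNotification", () => {
  it("delivers an email notification through the email provider", async () => {
    const send = jest.spyOn(emailProvider, "send").mockResolvedValue({ accepted: true });

    const result = await sendNotification({
      channel: "email",
      recipient: "user@example.com",
      message: "hello",
    });

    expect(send).toHaveBeenCalledTimes(1);
    expect(result.status).toBe("SENT");
  });
});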

• Automatically generate application changelog and release notes

𝜆 npm run release:minor

• Automatically generate application document

𝜆 npm run docs

• AWS resources created by CDK

• Not a monorepo app, in which multiple projects all live under one giant repo

• ONE AWS Lambda Layer to put all dependent NPM libraries and shared code into; it keeps the Lambda functions small and the codebase readable
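A hedged CDK sketch of that single shared layer; the asset path and names are invented:

// Sketch: one shared Lambda Layer holding the NPM dependencies and shared code.
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export function addSharedLayer(scope: Construct, fn: lambda.Function): void {
  const sharedLayer = new lambda.LayerVersion(scope, "SharedLayer", {
    // layers/shared/nodejs/node_modules holds the dependencies and shared code.
    code: lambda.Code.fromAsset("layers/shared"),
    compatibleRuntimes: [lambda.Runtime.NODEJS_18_X],
    description: "Shared NPM libraries and common code for all notification Lambdas",
  });

  fn.addLayers(sharedLayer);
}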

AWS EventBridge to trigger and send events to the Request Handler, for scheduled tasks
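A hedged CDK sketch of the EventBridge schedule that triggers the Request Handler; the rate and names are invented:

// Sketch: EventBridge rule invoking the request-handler Lambda on a schedule.
import { Duration } from "aws-cdk-lib";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export function scheduleRequestHandler(scope: Construct, requestHandler: lambda.IFunction): void {
  const rule = new events.Rule(scope, "ScheduledNotifications", {
    schedule: events.Schedule.rate(Duration.hours(1)),
  });
  rule.addTarget(new targets.LambdaFunction(requestHandler));
}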

• Health check, with a service monitoring dashboard, to verify dependencies at the endpoints and keep the Lambda warm

𝜆 curl https://c81234xdae8w1a9.execute-api.ap-southeast-2.amazonaws.com/health
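A hedged sketch of such a health-check handler; the dependency URL and response shape are illustrative:

// Sketch: a /health Lambda handler that verifies a downstream dependency
// and doubles as a keep-warm target. The checked URL is illustrative.
import { APIGatewayProxyHandler } from "aws-lambda";
import https from "https";

function ping(url: string): Promise<boolean> {
  return new Promise((resolve) => {
    https
      .get(url, (res) => resolve((res.statusCode ?? 500) < 500))
      .on("error", () => resolve(false));
  });
}

export const handler: APIGatewayProxyHandler = async () => {
  const dependencyHealthy = await ping("https://example.com/status");
  return {
    statusCode: dependencyHealthy ? 200 : 503,
    body: JSON.stringify({ status: dependencyHealthy ? "OK" : "DEGRADED" }),
  };
};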

Cloud computing and serverless architecture put developers in the fast lane for application development. Right now, there is so much low-hanging fruit to pick.

As developers, we should not always think about our comfort zone; we need to think about the people who will take over our work, and about the BAU team who will support the application. The codebase is not about you, but about the value that your code brings to others and to the organisation that you work for.

Bring back MagSafe

My first published video, created with Apple Final Cut Pro, is on YouTube on the official channel, titled Bring back MagSafe. It covers a solution that brings one of Apple's most innovative designs back to the MacBook Pro, iPad … and Android phones: https://www.youtube.com/watch?v=yvkJR4Y0FK0

Risk Management for CI/CD processes

Consider a full development and deployment cycle, and the potential risks involved during the different stages in CDP (CI / Continuous Integration, CD / Continuous Delivery, CDP / Continuous Deployment):

  • Code
    - Stakeholders: Individual Developer; Pair Programming Mentor; DBA; Security Team
    - Failure Points: Logic flaws; Security flaws; Code standards issues
    - Safeguards: Test Driven Development (Red/Green/Refactor); Linting tools; Testing Docker containers; Pair programming; Query analysis; Static code analysis
  • Commit
    - Stakeholders: Security Team Member for sign-off; Engineering Team Lead for sign-off
    - Failure Points: Force pushes; Merge conflicts
    - Safeguards: Master branch protections; 3-member sign-off before master merge; Commit hooks
  • Test
    - Stakeholders: Individual Developer; QA Team
    - Failure Points: Broken tests; Stale tests; False positive tests
    - Safeguards: Weekly failure testing triage meeting to catch broken tests; Daily cron runs of the test suite against a mock prod environment
  • Deployment
    - Stakeholders: SysOps Team; Individual Developers; Support Team; Customers
    - Failure Points: Broken deployments; Dropped customer traffic
    - Safeguards: Blue/Green deployment; Traffic re-routing; Pre-deployment spare instance warm-up; Communicate out to support in order to verify proper staffing levels
  • Runtime
    - Stakeholders: Security Team; SysOps Team; Engineering Teams; Support Team; Customers
    - Failure Points: High resource usage; Slow queries; Malicious actors; Provider downtime
    - Safeguards: Communicate out to support for new feature awareness and appropriate categories for issues regarding the component; System resource alarms for various metrics and slow DB log alerts; Instant maintenance page switchover capabilities; Status page on redundant providers; Application firewalls; Database replicas

AWS CloudWatch Metrics Example

AWS CloudWatch Metrics

The Metrics interface in the AWS CloudWatch console:

AWS CloudWatch - Metrics

The URL:

https://ap-southeast-2.console.aws.amazon.com/cloudwatch/home?region=ap-southeast-2#metricsV2:graph=~(metrics~(~(~'AWS*2fRoute53Resolver~'InboundQueryVolume)~(~'.~'OutboundQueryVolume))~view~'timeSeries~stacked~false~region~'ap-southeast-2~stat~'Sum~period~86400~start~'-P28D~end~'P0D);query=~'*7bAWS*2fRoute53Resolver*7d

Metrics source:

{
"metrics": [
[ "AWS/Route53Resolver", "InboundQueryVolume" ],
[ ".", "OutboundQueryVolume" ]
],
"view": "timeSeries",
"stacked": false,
"region": "ap-southeast-2",
"stat": "Sum",
"period": 86400,
"title": "Test"
}
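The same query can also be made programmatically. Here is a hedged TypeScript sketch using the AWS SDK v3 CloudWatch client for the two Route53Resolver metrics above (Sum over 1-day periods for the last 28 days, matching the console URL):

// Sketch: fetching the Route53Resolver metrics with the AWS SDK v3 CloudWatch client.
import { CloudWatchClient, GetMetricDataCommand } from "@aws-sdk/client-cloudwatch";

const cloudwatch = new CloudWatchClient({ region: "ap-southeast-2" });

async function queryResolverVolumes() {
  const end = new Date();
  const start = new Date(end.getTime() - 28 * 24 * 3600 * 1000);

  const { MetricDataResults } = await cloudwatch.send(
    new GetMetricDataCommand({
      StartTime: start,
      EndTime: end,
      MetricDataQueries: ["InboundQueryVolume", "OutboundQueryVolume"].map((name, i) => ({
        Id: `m${i}`,
        MetricStat: {
          Metric: { Namespace: "AWS/Route53Resolver", MetricName: name },
          Period: 86400,
          Stat: "Sum",
        },
      })),
    })
  );
  return MetricDataResults;
}

queryResolverVolumes().then(console.log);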

Creating AWS Lambda with AWS SAM

This is a simple Lambda with a REST API and SNS enabled. First, have a look at the Node.js script:
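Based on the resources created below (an API Gateway-triggered function and an SNS topic), a minimal handler might look like the following TypeScript sketch; the TOPIC_ARN environment variable and message content are assumptions, not the original code:

// Sketch: a minimal hello-world handler that publishes to the SNS topic on each request.
// TOPIC_ARN is assumed to be injected via the SAM template; this is not the original script.
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

export const handler = async (event: { path?: string }) => {
  await sns.send(
    new PublishCommand({
      TopicArn: process.env.TOPIC_ARN,
      Subject: "Hello SAM",
      Message: `API invoked: ${event.path ?? "/"}`,
    })
  );

  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello World" }),
  };
};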

AWS SAM template yaml file:

Generate the AWS CloudFormation yaml file and package / zip / create an artefact (the AWS S3 bucket hello-sam-tub needs to be created in advance):

𝜆 sam package --profile personal --template-file template.yml --output-template-file cloudFormation.yml --s3-bucket hello-sam-tub
Uploading to 7431f83ac979bfccc26980049807e595 1461 / 1461.0 (100.00%)

Successfully packaged artifacts and wrote output template to file cloudFormation.yml.
Execute the following command to deploy the packaged template
sam deploy --template-file /Users/terrence/Projects/hello-sam/cloudFormation.yml --stack-name <YOUR STACK NAME>

The artefact file can also be created with the zip command and uploaded into the AWS S3 bucket:

𝜆 zip hello-sam.zip README.md index.js template.yml

This is what the AWS CloudFormation yaml file looks like:

Deploy the application's AWS CloudFormation stack with the AWS SAM command:

𝜆 sam deploy --profile personal --template-file cloudFormation.yml --stack-name hello-sam --capabilities CAPABILITY_IAM

Deploying with following values
===============================
Stack name : hello-sam
Region : None
Confirm changeset : False
Deployment s3 bucket : None
Capabilities : ["CAPABILITY_IAM"]
Parameter overrides : {}

Initiating deployment
=====================

Waiting for changeset to be created..

CloudFormation stack changeset
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Operation LogicalResourceId ResourceType
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Add HelloWorldFunctionHelloWorldApiPermissionProd AWS::Lambda::Permission
+ Add HelloWorldFunctionRole AWS::IAM::Role
+ Add HelloWorldFunction AWS::Lambda::Function
+ Add HelloWorldTopic AWS::SNS::Topic
+ Add ServerlessRestApiDeployment79454cea13 AWS::ApiGateway::Deployment
+ Add ServerlessRestApiProdStage AWS::ApiGateway::Stage
+ Add ServerlessRestApi AWS::ApiGateway::RestApi
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:ap-southeast-2:123456789012:changeSet/samcli-deploy1581737165/48e53ff2-1b50-45d8-bbfd-97652f20d967


2020-02-15 14:26:10 - Waiting for stack create/update to complete

CloudFormation events from changeset
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS AWS::SNS::Topic HelloWorldTopic Resource creation Initiated
CREATE_COMPLETE AWS::SNS::Topic HelloWorldTopic -
CREATE_IN_PROGRESS AWS::IAM::Role HelloWorldFunctionRole Resource creation Initiated
CREATE_COMPLETE AWS::IAM::Role HelloWorldFunctionRole -
CREATE_IN_PROGRESS AWS::Lambda::Function HelloWorldFunction Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function HelloWorldFunction -
CREATE_IN_PROGRESS AWS::ApiGateway::RestApi ServerlessRestApi Resource creation Initiated
CREATE_COMPLETE AWS::ApiGateway::RestApi ServerlessRestApi -
CREATE_IN_PROGRESS AWS::Lambda::Permission HelloWorldFunctionHelloWorldApiPermissionProd Resource creation Initiated
CREATE_IN_PROGRESS AWS::ApiGateway::Deployment ServerlessRestApiDeployment79454cea13 Resource creation Initiated
CREATE_COMPLETE AWS::ApiGateway::Deployment ServerlessRestApiDeployment79454cea13 -
CREATE_IN_PROGRESS AWS::ApiGateway::Stage ServerlessRestApiProdStage Resource creation Initiated
CREATE_COMPLETE AWS::ApiGateway::Stage ServerlessRestApiProdStage -
CREATE_COMPLETE AWS::Lambda::Permission HelloWorldFunctionHelloWorldApiPermissionProd -
CREATE_COMPLETE AWS::CloudFormation::Stack hello-sam -
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack - hello-sam in None

NOTE: The --capabilities CAPABILITY_IAM option is necessary to authorise your stack to create IAM roles, which SAM applications do by default.

After the application is deployed, a user who subscribes to the notification will receive an email titled AWS Notification - Subscription Confirmation. After confirmation, the user will receive an email every time the API is invoked.

Now log on to the AWS Console and have a look at the resources this Lambda application uses: CloudFormation, S3 Bucket, Lambda, IAM, SNS, CloudWatch and API Gateway.

After this Lambda application is successfully deployed into AWS, you will receive an email asking whether you want to subscribe to the SNS topic. You can also unsubscribe from the SNS topic, and you can manually test the Lambda function in the AWS Console.


2019: Year in Review

2019 was the worst year of the past decade, but it may well turn out to be the best year of the decade to come.

Year 2019 - 01

Figure 1: This is not a legend; this is reality.

On 24 January, a self-media account published a "false" report about a personnel change at the CSRC, drawing wide attention. An official at the China Securities Regulatory Commission pointed out that self-media is not beyond the law, and that the irresponsible spread of false information should be dealt with in accordance with laws and regulations.

On 26 January, the Chinese authorities announced a change at the top of the CSRC, the body responsible for regulating the stock market. Liu Shiyu, who had taken over three years earlier in the aftermath of a "catastrophic" stock market crash, ended his term, and his role was taken over by Yi Huiman, chairman of another state-owned bank, the Industrial and Commercial Bank of China.

A "rumour", it turns out, is just "a prediction that is far ahead of its time".

Was Lao Liu's picture a salute to the CSRC, or to China's retail investors?

"How much sorrow can one bear? As much as a portfolio full of PetroChina. Had I not cut my losses then, the sorrow today would be greater still." That was a news headline on 26 August, when PetroChina's share price closed at 6.02 yuan, a fresh record low. At its peak the stock traded at 48 yuan; in the 12 years since listing it has wiped out 7 trillion yuan of market value, falling from the most valuable listed company to "the stock that trapped the most investors". 7 trillion yuan is roughly the market value of one Apple, or two entire Russian stock markets.

"Countless times I rode a big bull, only to be scared off halfway. Then I watched it soar into the sky, too drained even to cry. I really want to chop off my own wretched hands."

"Countless times I stepped into a big pile of dung and stubbornly held on for years. Then I waited for dividends that never came, without even the nerve to petition. I really want to chop off my own feet as well."

For Chinese retail investors, the bull markets they long for are like a first love: the long-faded scent of the bull feels as wonderful as a spring dream.

Year 2019 - 02

Figure 2: Hayne's Report.

In February, the final report of the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, led by Kenneth Hayne, was published. It led to the CEOs of three of the big four banks, pillars of the Australian economy, being forced to resign over the course of the year.

Year 2019 - 03

Figure 3: A universe beyond imagination.

In April, humanity obtained its first ever photograph of a black hole. The supermassive black hole of M87, in the constellation Virgo, is about 55 million light years away from us, with a mass of roughly 6.6 billion Suns.

Year 2019 - 04

Figure 4: I believe in miracles.

In the month before the federal election, every opinion poll tipped Labor to win. Sportsbet backed a Labor win at odds of $1.16, with the Coalition at $5.80. Labor's campaign team stumped around the country full of confidence, certain of victory. On the Saturday morning of election day, the "mainstream" TV and newspapers were still predicting a Labor landslide. Yet the election ended in a dramatic come-from-behind win for the Coalition.

The polling firms had to admit that the Machine Learning / AI they had been boasting about was little more than a parlour trick. Sportsbet lost $5.2m on its costly error. The "mainstream" media also had to slap its own face: radical political idiosyncrasy and elitism bias can hardly claim to represent anyone.

Monash University professor Andrew Markus said Australians usually nominated jobs, the economy and financial security as their top concerns and may have recoiled from Labor’s sweeping plans for tax revenue increases.

“If there’s a danger that your agenda challenges those economic factors, you’re on pretty rocky ground.”

Now quiet Australians are heard loud and clear.

The swing to the Liberals suggested voters were sceptical of policies to raise $56 billion from changes to dividend rules, $32 billion from negative gearing and $30 billion over a decade from superannuation.

“What one person receives without working for, another person must work for without receiving.” Here is the wisdom from Adrian Rogers.

Wasn't it Labor shadow treasurer Mr. Chris Bowen who said "if you don't like us taking away your franking credits then don't vote for Labor" before the election?

Well, Mr. Bowen, thank you for your invitation!

Year 2019 - 05

Figure 5: Eliud Kipchoge.

On 12 October, the 34-year-old Kenyan and marathon world record holder Eliud Kipchoge made the second attempt of his career at the grand Breaking2 goal of a sub-two-hour marathon, in Vienna, Austria, the "world capital of music". It was a historic moment as humanity crossed a new marathon milestone: the two-hour barrier was broken.

1 hour 59 minutes 40 seconds. To put that in perspective, Eliud Kipchoge ran every 100 metres in about 17 seconds, for two hours straight. 17 frigging seconds!

Year 2019 - 06

Figure 6: Reserve Bank of Australia cuts interest rates to a historic low.

The central bank is well behind the curve. More ominously, it is a sign that asset bubbles are poised to burst, just as the Fed's first interest rate cut came directly ahead of both the tech bust and the GFC.

The spectre haunting the economy today is debt. The next economic crisis may well be a debt crisis.

Government debt, corporate debt, household debt: when the pile of liabilities exceeds borrowers' capacity to repay, it triggers a debt crisis and a financial tsunami. First comes financial distress, then the economy slides from instability into collapse, the middle class goes bankrupt, and society is thrown into upheaval.

Year 2019 - 07

Figure 7: Dow 28,000.

In November came another historic moment: the Dow closed above 28,000, getting from 27,000 to 28,000 in just 90 trading days.

Global equity markets have been on a tear. The ratio of US stock market capitalisation to GDP has exceeded 150%, surpassing the valuations at the peak of the 2000 dot-com bubble and the 2007 housing bubble.

The property market has been charging ahead as well: US house prices are now about 20% above their 2007 peak. By almost any measure, a whole set of indicators show that capital market valuations have already exceeded 2007 levels. This is an enormous bubble.

Some predict that 2020 will be the hardest year since the subprime crisis. But since every passing year seems to have been "the hardest year of the past decade", and the global economy has never returned to the fast lane of rapid growth, next year may be only the beginning: the hardest stretch will be the decade ahead.

Debt leverage now exceeds 250%, and central banks around the world are holding rates at zero or below. In such conditions, bubbles in equities and property are inevitable.

Once these two bubbles burst, the real economy will inevitably be hit. If 400-plus years of economic history teach anything, it is that market forces can be delayed, but they never fail to show up.

Year 2019 - Home grown farm

Figure 8: Home grown farm.

Farming to subsidise the day job. With the economy sluggish, investment returns low, and the prices of meat, vegetables and fruit stubbornly high, home grown apricots, apples and olives from pots in the backyard help offset shrinking income and growing expenses.

Year 2019 - Year in sports

Figure 9: Year in sports - Personal Best.

A record-setting year. Total running distance (running plus tennis) reached 1,300+ km, roughly the distance from Beijing to Shanghai. Climbing totalled 14,652 m, about 1.7 times the height of Mount Everest.

Never let the years hold you back, never let hardship make you abandon your dreams, never let your limits stop you from moving forward. A journey of a thousand miles begins with a single step. Keep running!

AWS EKS for Fargate, with eksctl

AWS EKS, with eksctl

Second try with AWS EKS on Fargate. This time with eksctl.

Create EKS cluster:

𝜆 eksctl create cluster --name sandpit --version 1.14 --region us-east-2 --fargate
[ℹ] eksctl version 0.11.1
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2b us-east-2a us-east-2c]
[ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "sandpit" in "us-east-2" region with Fargate profile
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=sandpit'
[ℹ] CloudWatch logging will not be enabled for cluster "sandpit" in "us-east-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=sandpit'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "sandpit" in "us-east-2"
[ℹ] 1 task: { create cluster control plane "sandpit" }
[ℹ] building cluster stack "eksctl-sandpit-cluster"
[ℹ] deploying stack "eksctl-sandpit-cluster"
[✔] all EKS cluster resources for "sandpit" have been created
[✔] saved kubeconfig as "/Users/terrence/.kube/config"
[ℹ] creating Fargate profile "fp-default" on EKS cluster "sandpit"
[ℹ] created Fargate profile "fp-default" on EKS cluster "sandpit"
[ℹ] "coredns" is now schedulable onto Fargate
[ℹ] "coredns" is now scheduled onto Fargate
[ℹ] "coredns" pods are now scheduled onto Fargate
[ℹ] kubectl command should work with "/Users/terrence/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "sandpit" in "us-east-2" region is ready

AWS EKS on Fargate, with eksctl - Cluster

Create and add an EKS managed node group:

𝜆 eksctl create nodegroup --cluster sandpit --name workers --node-type t3a.medium --ssh-access --ssh-public-key aws-us-key --managed
[ℹ] eksctl version 0.11.1
[ℹ] using region us-east-2
[ℹ] will use version 1.14 for new nodegroup(s) based on control plane version
[ℹ] using EC2 key pair %!!(MISSING)q(*string=<nil>)
[ℹ] 1 nodegroup (workers) was included (based on the include/exclude rules)
[ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "sandpit"
[ℹ] 1 task: { 1 task: { create managed nodegroup "workers" } }
[ℹ] building managed nodegroup stack "eksctl-sandpit-nodegroup-workers"
[ℹ] deploying stack "eksctl-sandpit-nodegroup-workers"
[✔] created 0 nodegroup(s) in cluster "sandpit"
[ℹ] nodegroup "workers" has 2 node(s)
[ℹ] node "ip-192-168-47-175.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-87-98.us-east-2.compute.internal" is ready
[ℹ] waiting for at least 2 node(s) to become ready in "workers"
[ℹ] nodegroup "workers" has 2 node(s)
[ℹ] node "ip-192-168-47-175.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-87-98.us-east-2.compute.internal" is ready
[✔] created 1 managed nodegroup(s) in cluster "sandpit"
[ℹ] checking security group configuration for all nodegroups
[ℹ] all nodegroups have up-to-date configuration

Kubernetes Dashboard

Install the Kubernetes Dashboard into the Kubernetes cluster, then check the services and pods:

𝜆 kubectl get services  --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 70m
kube-system kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 70m
kube-system metrics-server ClusterIP 10.100.142.106 <none> 443/TCP 14m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.100.91.78 <none> 8000/TCP 11m
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.100.75.0 <none> 443/TCP 11m

𝜆 kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-cnzrv 1/1 Running 0 40m
kube-system aws-node-m9tjp 1/1 Running 0 40m
kube-system coredns-7f5cccffc-h44mz 1/1 Running 0 65m
kube-system coredns-7f5cccffc-hmx7g 1/1 Running 0 65m
kube-system kube-proxy-7kn62 1/1 Running 0 40m
kube-system kube-proxy-g57ph 1/1 Running 0 40m
kube-system metrics-server-7fcf9cc98b-ftl4k 1/1 Running 0 14m
kubernetes-dashboard dashboard-metrics-scraper-677768c755-mxlmc 1/1 Running 0 11m
kubernetes-dashboard kubernetes-dashboard-995fd6fb4-xqcj5 1/1 Running 0 11m

Connect to the Kubernetes Dashboard via the proxy:

𝜆 cat .kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /Users/terrence/.minikube/ca.crt
server: https://192.168.99.100:8443
name: minikube
- cluster:
certificate-authority-data: LS0tLS1CRUd ... tLS0tLQo=
server: https://0559DE89F43B8766B56C7FD066C6C50F.yl4.us-east-2.eks.amazonaws.com
name: sandpit.us-east-2.eksctl.io
contexts:
- context:
cluster: sandpit.us-east-2.eksctl.io
user: ADMMiaoT@sandpit.us-east-2.eksctl.io
name: ADMMiaoT@sandpit.us-east-2.eksctl.io
- context:
cluster: minikube
user: minikube
name: minikube
current-context: ADMMiaoT@sandpit.us-east-2.eksctl.io
kind: Config
preferences: {}
users:
- name: ADMMiaoT@sandpit.us-east-2.eksctl.io
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- token
- -i
- sandpit
command: aws-iam-authenticator
env:
- name: AWS_PROFILE
value: paradise-dev
- name: minikube
user:
client-certificate: /Users/terrence/.minikube/client.crt
client-key: /Users/terrence/.minikube/client.key

𝜆 kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}'
eks-admin-token-s2gf5

𝜆 kubectl -n kube-system describe secret eks-admin-token-s2gf5
Name: eks-admin-token-s2gf5
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: eks-admin
kubernetes.io/service-account.uid: fa3cf514-18bc-11ea-bbdd-0a4cd5e8dc70

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIs ... hpY8upQlA2q40g

𝜆 kubectl proxy

Visit URL http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login

Choose Token, paste the token output from the previous command into the Token field, and choose SIGN IN.

AWS EKS on Fargate, with eksctl - Nodes

With AWS managed nodes, on the node EC2 instance:

AWS EKS on Fargate, with eksctl - EC2 Instance

First Docker application

Deploy the first Docker application, react-typescript, from Docker Hub https://hub.docker.com/r/jtech/react-typescript, into Kubernetes.

𝜆 kubectl run react-typescript --image=jtech/react-typescript --port 3000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/react-typescript created

𝜆 kubectl describe deployments
Name: react-typescript
Namespace: default
CreationTimestamp: Mon, 09 Dec 2019 14:56:09 +1100
Labels: run=react-typescript
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=react-typescript
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=react-typescript
Containers:
react-typescript:
Image: jtech/react-typescript
Port: 3000/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: react-typescript-867c948446 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 71s deployment-controller Scaled up replica set react-typescript-867c948446 to 1

𝜆 kubectl describe pods react-typescript
Name: react-typescript-867c948446-qtvrp
Namespace: default
Priority: 2000001000
PriorityClassName: system-node-critical
Node: fargate-ip-192-168-183-250.us-east-2.compute.internal/192.168.183.250
Start Time: Mon, 09 Dec 2019 14:56:59 +1100
Labels: eks.amazonaws.com/fargate-profile=fp-default
pod-template-hash=867c948446
run=react-typescript
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.183.250
Controlled By: ReplicaSet/react-typescript-867c948446
Containers:
react-typescript:
Container ID: containerd://2ea5f1ea4fb731eb844f0e267581e9e188d29ab7a639b7b8ca50c83cfb12b4c3
Image: jtech/react-typescript
Image ID: docker.io/jtech/react-typescript@sha256:0951fe4d9a24390123c7aa23612c8cdf1d8191a9d8e7d3cbc8bb4d8d763e0ce5
Port: 3000/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 09 Dec 2019 14:57:28 +1100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-knpqq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-knpqq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-knpqq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 76s kubelet, fargate-ip-192-168-183-250.us-east-2.compute.internal Pulling image "jtech/react-typescript"
Normal Pulled 49s kubelet, fargate-ip-192-168-183-250.us-east-2.compute.internal Successfully pulled image "jtech/react-typescript"
Normal Created 49s kubelet, fargate-ip-192-168-183-250.us-east-2.compute.internal Created container react-typescript
Normal Started 49s kubelet, fargate-ip-192-168-183-250.us-east-2.compute.internal Started container react-typescript

Expose service:

𝜆 kubectl expose deployment react-typescript --type="NodePort"
service/react-typescript exposed

𝜆 kubectl describe services react-typescript
Name: react-typescript
Namespace: default
Labels: run=react-typescript
Annotations: <none>
Selector: run=react-typescript
Type: NodePort
IP: 10.100.54.37
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
NodePort: <unset> 31799/TCP
Endpoints: 192.168.183.250:3000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

𝜆 kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 46h
default react-typescript NodePort 10.100.54.37 <none> 3000:31799/TCP 4m55s
kube-system kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 46h
kube-system metrics-server ClusterIP 10.100.142.106 <none> 443/TCP 45h
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.100.91.78 <none> 8000/TCP 45h
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.100.75.0 <none> 443/TCP 45h

Run kubectl proxy and connect to the react-typescript application at the URL: http://localhost:8001/api/v1/namespaces/default/services/http:react-typescript:3000/proxy/

AWS EKS on Fargate, with eksctl - React Typescript


AWS EKS for Fargate

AWS EKS

After AWS EKS for Fargate was announced at re:Invent 2019 - Amazon EKS on AWS Fargate Now Generally Available - I took it for a quick spin.

General configuration:

AWS EKS on Fargate - Configuration

AWS EKS on Fargate - Configuration

Fargate profile configuration:

AWS EKS on Fargate - Profile

Fargate roles:

AWS EKS on Fargate - Roles

CustomEKSRole role has AmazonEKSClusterPolicy and AmazonEKSServicePolicy.

CustomEKSFargatePodExecutionRole role has AmazonEKSFargatePodExecutionRolePolicy, and Trust relationships:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "eks-fargate-pods.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}

CustomEKSWorkerNodeRole role has AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, and Trust relationships:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}

The namespace for the Fargate profile's Pod Selectors is default.

Subnets for Fargate, including private subnets (subnets without an Internet Gateway):

AWS EKS on Fargate - Subnets
