The Way of Simplicity - The interface where human and machine exchange ideas

Why 60%?

GH60

The GH60 programmable keyboard (http://blog.komar.be/projects/gh60-programmable-keyboard/) is a rip-off of the Poker 2 keyboard: an open, community PCB design with the circuit boards made in China. Three weeks ago I placed an order on AliExpress (https://www.aliexpress.com/item/Customized-DIY-GH60-Case-Shell-PCB-Plate-Switches-LED-Kit-60-Mechanical-Keyboard-Satan-Poker2-GH/32651474350.html).

Cherry MX Brown Switch

Brown switches from the Cherry factory in Germany. Typing on them feels wonderfully tactile and layered.

iQunix Lambo 60%

Removed the stock plastic keyboard case and fitted an iQunix Lambo (https://www.aliexpress.com/item/Iqunix-lambo-60-mechanical-keyboard-anode-alumina-shell-base-gh60-poker2/32677061753.html) aluminium case.

GMK 3Run Keycap Set

Finally, with the German-made GMK 3Run ABS keycaps (https://www.massdrop.com/buy/gmk-3run-keycap-set) installed, a mechanical keyboard with its own signature, one that reflects personality and taste, is born.

The Way of Simplicity / Simplicity

Ant Colony Optimization (ACO)

Ant Colony Optimization (ACO) for the Traveling Salesman Problem (TSP).

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.

The ant colony algorithm is a probabilistic algorithm for finding optimised paths. It was proposed by Marco Dorigo in his 1992 PhD thesis, inspired by the way ants discover paths while foraging for food. The algorithm features distributed computation, positive feedback of information, and heuristic search; in essence it is a heuristic global optimisation algorithm from the family of evolutionary algorithms.

The Ant System (or Ant Colony System) was first proposed in the 1990s by the Italian researchers Dorigo, Maniezzo and others. Studying how ants forage, they found that while the behaviour of a single ant is quite simple, the colony as a whole exhibits intelligent behaviour: for example, a colony can find the shortest path to a food source across different environments. This is possible because ants in a colony pass information to one another through a shared mechanism. Further research showed that ants release a substance called a “pheromone” along the paths they travel. Ants can sense pheromone and tend to follow paths where its concentration is higher, and every ant that passes leaves more pheromone behind, creating a positive-feedback mechanism. After a while, the whole colony converges on the shortest path to the food source.

The basic idea of applying the ant colony algorithm to an optimisation problem: an ant's walk represents one feasible solution to the problem, and the set of all paths walked by the colony forms the solution space. Ants on shorter paths deposit more pheromone, so over time the pheromone concentration on shorter paths rises and ever more ants choose them. Eventually, driven by positive feedback, the whole colony concentrates on the best path, which corresponds to the optimal solution of the problem.
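Dorigo's Ant System makes this positive feedback precise with two standard rules (standard ACO notation, not taken from this post): ant $k$ at city $i$ picks the next city $j$ with a probability combining pheromone $\tau_{ij}$ and visibility $\eta_{ij} = 1/d_{ij}$, and after each iteration pheromone evaporates and is re-deposited in proportion to tour quality:

```latex
p_{ij}^{k} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}
                  {\sum_{l \in \mathrm{allowed}_k} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},
\qquad
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} = \frac{Q}{L_{k}}
```

where $\rho$ is the evaporation rate, $L_k$ the length of ant $k$'s tour, and $Q$ a constant: shorter tours deposit more pheromone per edge, which is exactly the feedback loop described above.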

That ants find the shortest path is due to the pheromone and the environment. Suppose two paths lead from the nest to the food, initially carrying roughly equal numbers of ants. Ants turn back as soon as they reach the food; those on the shorter path complete a round trip in less time, so per unit time more ants traverse it and more pheromone is left, which attracts still more ants and still more pheromone. The longer path behaves the opposite way, so more and more ants gather on the shortest path.

The ants' intelligent behaviour stems from simple behavioural rules that give the colony both diversity and positive feedback. While foraging, diversity keeps ants from walking into dead ends and looping forever, a kind of capacity for innovation; positive feedback preserves good information, a kind of reinforcement learning. Intelligent behaviour emerges from the subtle combination of the two. With excess diversity the system becomes over-active, producing too much random motion and descending into chaos; with too little diversity and overly strong positive feedback it ossifies, and the colony cannot adapt when the environment changes.

Compared with other optimisation algorithms, the ant colony algorithm has the following characteristics:

(1) It uses a positive-feedback mechanism, so the search converges steadily and ultimately approaches the optimal solution.
(2) Each individual changes its surroundings by releasing pheromone and senses real-time changes in the environment; individuals communicate indirectly through the environment.
(3) The search is carried out in a distributed fashion, with many individuals computing in parallel, which greatly increases the algorithm's computing power and efficiency.
(4) The heuristic, probabilistic search does not easily get trapped in local optima, making it easier to reach the global optimum.

The algorithm has been applied to other combinatorial optimisation problems such as the traveling salesman problem, the assignment problem, job-shop scheduling, vehicle routing, graph colouring and network routing. In recent years its application to network routing has drawn growing attention from researchers, and several new ant-based routing algorithms have been proposed. Compared with traditional routing algorithms, it brings distributed information, dynamism, randomness and asynchrony, characteristics that match the needs of network routing well.
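The mechanics above translate almost line for line into code. Below is a minimal Ant System sketch for the TSP in Java; it is my own illustration (the class name, parameters, and values are assumptions, not taken from the linked demos):

```java
import java.util.Arrays;
import java.util.Random;

/** A minimal Ant System sketch for the TSP: roulette-wheel tour construction,
 *  then pheromone evaporation plus deposits proportional to tour quality. */
public final class AntColonyTsp {

    public static int[] solve(double[][] dist, int ants, int iterations, long seed) {
        int n = dist.length;
        double alpha = 1.0, beta = 2.0, rho = 0.5, q = 100.0; // typical textbook parameters
        double[][] tau = new double[n][n];
        for (double[] row : tau) Arrays.fill(row, 1.0);       // uniform initial pheromone
        Random random = new Random(seed);
        int[] bestTour = null;
        double bestLength = Double.POSITIVE_INFINITY;

        for (int iter = 0; iter < iterations; iter++) {
            double[][] delta = new double[n][n];
            for (int k = 0; k < ants; k++) {
                int[] tour = buildTour(dist, tau, alpha, beta, random);
                double length = tourLength(dist, tour);
                if (length < bestLength) { bestLength = length; bestTour = tour; }
                for (int i = 0; i < n; i++) {                 // deposit Q / L_k on each edge
                    int a = tour[i], b = tour[(i + 1) % n];
                    delta[a][b] += q / length;
                    delta[b][a] += q / length;
                }
            }
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    tau[i][j] = (1 - rho) * tau[i][j] + delta[i][j]; // evaporate + deposit
        }
        return bestTour;
    }

    // One ant walks a full tour, choosing each next city with probability
    // proportional to tau^alpha * (1/d)^beta over the unvisited cities.
    private static int[] buildTour(double[][] dist, double[][] tau,
                                   double alpha, double beta, Random random) {
        int n = dist.length;
        boolean[] visited = new boolean[n];
        int[] tour = new int[n];
        tour[0] = random.nextInt(n);
        visited[tour[0]] = true;
        for (int step = 1; step < n; step++) {
            int current = tour[step - 1];
            double[] weight = new double[n];
            double total = 0;
            for (int j = 0; j < n; j++)
                if (!visited[j]) {
                    weight[j] = Math.pow(tau[current][j], alpha)
                              * Math.pow(1.0 / dist[current][j], beta);
                    total += weight[j];
                }
            double r = random.nextDouble() * total;           // roulette-wheel selection
            int next = -1;
            for (int j = 0; j < n; j++)
                if (!visited[j]) {
                    r -= weight[j];
                    next = j;
                    if (r <= 0) break;
                }
            tour[step] = next;
            visited[next] = true;
        }
        return tour;
    }

    static double tourLength(double[][] dist, int[] tour) {
        double length = 0;
        for (int i = 0; i < tour.length; i++)
            length += dist[tour[i]][tour[(i + 1) % tour.length]];
        return length;
    }
}
```

The parameter balance mirrors the diversity-versus-positive-feedback trade-off described above: a higher evaporation rate `rho` keeps the search diverse, while stronger deposits reinforce good edges.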

Visualization

A visual demo of Ant Colony Optimisation written in JavaScript (ES6):

Visual demo of Ant Colony Optimisation

Another visual demo of Ant Colony Optimisation:

Visual demo of Ant Colony Optimisation

Spring Data - powerful and succinct abstraction

Database tier definition

Database tables, indexes and foreign keys defined in Liquibase configuration:

databaseChangeLog:
  - changeSet:
      id: 1
      author: Terrence Miao
      changes:
        - createTable:
            tableName: draft_order
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: c_number
                  type: varchar(32)
                  constraints:
                    nullable: false
              - column:
                  name: source_time_in_ms
                  type: bigint
                  constraints:
                    nullable: false
              - column:
                  name: source_item_id
                  type: varchar(255)
                  constraints:
                    nullable: false
              - column:
                  name: shipment
                  type: json
                  constraints:
                    nullable: false
              - column:
                  name: shipment_id
                  type: varchar(255)
                  constraints:
                    nullable: true
              - column:
                  name: quantity
                  type: int
                  constraints:
                    nullable: false
              - column:
                  name: source_system
                  type: varchar(255)
                  constraints:
                    nullable: false
              - column:
                  name: status
                  type: varchar(32)
                  constraints:
                    nullable: false
        - createIndex:
            columns:
              - column:
                  name: source_item_id
            indexName: idx_source_item_id
            tableName: draft_order
            unique: false
        - createIndex:
            columns:
              - column:
                  name: c_number
              - column:
                  name: source_item_id
            indexName: idx_c_number_source_item_id
            tableName: draft_order
            unique: true
        - createTable:
            tableName: draft_order_combined
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: combined_id
                  type: varchar(64)
                  constraints:
                    nullable: false
              - column:
                  name: draft_order_id
                  type: int
                  constraints:
                    nullable: false
        - addForeignKeyConstraint:
            baseColumnNames: draft_order_id
            baseTableName: draft_order_combined
            constraintName: fk_draft_order_combined_draft_order
            onDelete: CASCADE
            onUpdate: RESTRICT
            referencedColumnNames: id
            referencedTableName: draft_order
  - changeSet:
      id: 2
      author: Terrence Miao
      changes:
        - addColumn:
            columns:
              - column:
                  # For MySQL 5.7.x and above, the first TIMESTAMP column in the table implicitly gets the current
                  # timestamp as its default value, so if an INSERT or UPDATE does not supply a value, the column
                  # gets the current timestamp. Any subsequent TIMESTAMP column must have a default value explicitly
                  # defined. If you have two TIMESTAMP columns and don't specify a default for the second one, you
                  # will get this error while trying to create the table:
                  # ERROR 1067 (42000): Invalid default value for 'COLUMN_NAME'
                  name: date_created
                  type: timestamp(3)
                  constraints:
                    nullable: false
              - column:
                  name: date_updated
                  type: timestamp(3)
                  defaultValueComputed: LOCALTIMESTAMP(3)
                  constraints:
                    nullable: false
            tableName: draft_order
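For completeness, a sketch of how such a changelog is typically wired up. Assuming the application runs on Spring Boot (an assumption; the post does not show the runtime configuration), Liquibase applies pending changeSets at startup when application.yml points at the master changelog. The property names below are the Spring Boot 1.5.x ones; Spring Boot 2.x moves them under the spring.liquibase prefix:

```yaml
# Hypothetical application.yml fragment - the changelog path is illustrative.
liquibase:
  enabled: true
  change-log: classpath:/db/changelog/db.changelog-master.yaml
```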

DAO definition

  • Draft Order
@Entity
@Table(name = "draft_order")
public class DraftOrder implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    @Column(name = "c_number")
    private String cNumber;

    @Column(name = "source_time_in_ms")
    private Long sourceTimeInMs;

    @Column(name = "source_item_id")
    private String sourceItemId;

    @Column(name = "shipment", columnDefinition = "json")
    private String shipment;

    @Column(name = "shipment_id")
    private String shipmentId;

    @Column(name = "quantity")
    private Integer quantity;

    @Column(name = "source_system")
    private String sourceSystem;

    @Column(name = "status")
    private String status;
}
  • Draft Order Combined
@Entity
@Table(name = "draft_order_combined")
public class DraftOrderCombined implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    @Column(name = "combined_id")
    private String combinedId;

    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "draft_order_id")
    private DraftOrder draftOrder;
}
  • An intermediate aggregation object
public class CombinedIdSourceTimeInMs {

    private Long counter;
    private String combinedId;
    private Long sourceTimeInMs;

    public CombinedIdSourceTimeInMs(Long counter, String combinedId, Long sourceTimeInMs) {
        this.counter = counter;
        this.combinedId = combinedId;
        this.sourceTimeInMs = sourceTimeInMs;
    }
}

CRUD Repository definition

  • DraftOrderRepository
public interface DraftOrderRepository extends CrudRepository<DraftOrder, Integer> {

    List<DraftOrder> findByCNumberAndStatusOrderBySourceTimeInMsDesc(String cNumber, String status, Pageable pageable);

    List<DraftOrder> findByCNumberAndSourceItemIdIn(String cNumber, List<String> sourceItemIds);

    DraftOrder findByCNumberAndSourceItemId(String cNumber, String sourceItemId);

    List<DraftOrder> findByShipmentIdInAndStatusAndSourceSystem(List<String> shipmentIds, String status, String sourceSystem);

    List<DraftOrder> findByCNumberAndId(String cNumber, Integer id);

    Long countByCNumberAndStatus(String cNumber, String status);
}
  • DraftOrderCombinedRepository
public interface DraftOrderCombinedRepository extends CrudRepository<DraftOrderCombined, Integer> {

    String FIND_QUERY =
            "SELECT new org.paradise.data.dao.CombinedIdSourceTimeInMs"
            + "(count(doc) as counter, doc.combinedId as combinedId, min(doc.draftOrder.sourceTimeInMs) as sourceTimeInMs) "
            + " FROM DraftOrderCombined doc WHERE doc.draftOrder.cNumber = :cNumber AND doc.draftOrder.status = :status "
            + " GROUP BY combinedId "
            + " ORDER BY sourceTimeInMs DESC";

    String COUNT_QUERY = "SELECT count(1) FROM "
            + "(SELECT count(1) FROM DraftOrderCombined doc WHERE doc.draftOrder.cNumber = :cNumber AND doc.draftOrder.status = :status"
            + " GROUP BY doc.combinedId)";

    @Query(value = FIND_QUERY, countQuery = COUNT_QUERY)
    List<CombinedIdSourceTimeInMs> countPerCombinedIdAndSourceTimeInMs(@Param("cNumber") String cNumber,
            @Param("status") String status, Pageable pageable);

    List<DraftOrderCombined> findByCombinedIdOrderByDraftOrderSourceTimeInMsDesc(String combinedId);
}

SQL script that generates random data and inserts it into a MySQL database

DROP PROCEDURE IF EXISTS InsertRandomRecords;
DELIMITER $$
CREATE PROCEDURE InsertRandomRecords(IN NumRows INT)
BEGIN
    DECLARE i INT;
    SET i = 1;
    START TRANSACTION;
    WHILE i <= NumRows DO
        INSERT INTO draftorders.draft_order (c_number, source_time_in_ms, source_item_id, shipment, shipment_id, quantity, source_system, status)
        VALUES ('C01234567890', RAND()*1000000000, CONCAT('randomSourceRef-', UUID_SHORT()),
            '{"to": {"name": "T T", "lines": ["Lvl 100", "123 smith st"], "phone": "0356567567", "state": "VIC", "suburb": "Greensborough", "postcode": "3088", "business_name": "In debt"}, "from": {"name": "Carl Block", "lines": ["1341 Dandenong Road"], "state": "VIC", "suburb": "Geelong", "postcode": "3220"}, "items": [{"width": "10", "height": "10", "length": "10", "weight": "10", "product_id": "3D85", "item_reference": "blocked", "authority_to_leave": true, "allow_partial_delivery": true, "contains_dangerous_goods": true}], "shipment_reference": "My second shipment ref", "customer_reference_1": "cr1234", "customer_reference_2": "cr5678"}',
            UUID(), 1, 'EBAY', ELT(1 + FLOOR(RAND()*3), 'DRAFT', 'READY_TO_SHIP', 'SHIPPED'));
        SET i = i + 1;
    END WHILE;
    COMMIT;
END$$
DELIMITER ;

To generate 1,000,000 draft orders:

CALL InsertRandomRecords(1000000);

Set up and run AWS Lambda 'hello' function with serverless

serverless

With the latest Node.js 6.x installed, install serverless globally:

$ npm install serverless -g

AWS Lambda

Create an AWS Lambda skeleton project with serverless:

$ mkdir serverless-example && cd $_

$ sls create -t aws-nodejs
Serverless: Generating boilerplate...
 _______                             __
|   _   .-----.----.--.--.-----.----|  .-----.-----.-----.
|   |___|  -__|   _|  |  |  -__|   _|  |  -__|__ --|__ --|
|____   |_____|__|  \___/|_____|__| |__|_____|_____|_____|
|   |   |             The Serverless Application Framework
|       |                           serverless.com, v1.7.0
 -------'

Serverless: Successfully generated boilerplate for template: "aws-nodejs"
Serverless: NOTE: Please update the "service" property in serverless.yml with your service name
  • Policies set up for Lambda function

The AWS user “ec2-user” now needs some policies with permissions that let “serverless” create the role and the Lambda function, and deploy it …

Policies set up for Lambda function

  • Roles for Lambda function

The Lambda function role is created after the Lambda function is added and deployed into AWS.

Roles for Lambda function

Deployment

Make sure the AWS environment has been set up, including access key, user, group, policies …

Pack and deploy Lambda example into AWS:

$ sls deploy -r ap-southeast-2 -s dev
Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading service .zip file to S3 (583 B)...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
..................
Serverless: Stack update finished...
Service Informations
service: serverless-example
stage: dev
region: ap-southeast-2
api keys:
None
endpoints:
None
functions:
serverless-example-dev-hello
  • Lambda “hello” function

A “hello” function has been created in Lambda after it is deployed into AWS by “serverless”.

Lambda "hello" function

  • Events generated during Lambda function deployment

Deployment events generated while the Lambda “hello” function is deployed into AWS.

Events generated during Lambda function deployment

  • Add Lambda Trigger on AWS API Gateway

Manually create a Lambda Trigger. This time we use AWS API Gateway to trigger / invoke the Lambda “hello” function.

Lambda Trigger created on AWS API Gateway

  • Exposed Lambda API Gateway

After the Lambda Trigger is created, a RESTful interface for the Lambda “hello” function is exposed.

Lambda API Gateway

Say “hello”

Set up the AWS API Gateway trigger for the Lambda “hello” function, then go to the URL, e.g.:

Function “hello” log:

{
  "message": "Go Serverless v1.0! Your function executed successfully!",
  "input": {
    "resource": "/serverless-example-dev-hello",
    "path": "/serverless-example-dev-hello",
    "httpMethod": "GET",
    "headers": {
      "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
      "Accept-Encoding": "gzip, deflate, sdch, br",
      "Accept-Language": "en-AU,en-GB;q=0.8,en-US;q=0.6,en;q=0.4",
      "CloudFront-Forwarded-Proto": "https",
      "CloudFront-Is-Desktop-Viewer": "true",
      "CloudFront-Is-Mobile-Viewer": "false",
      "CloudFront-Is-SmartTV-Viewer": "false",
      "CloudFront-Is-Tablet-Viewer": "false",
      "CloudFront-Viewer-Country": "AU",
      "Host": "b5dyhej16l.execute-api.ap-southeast-2.amazonaws.com",
      "Referer": "https://ap-southeast-2.console.aws.amazon.com/lambda/home?region=ap-southeast-2",
      "upgrade-insecure-requests": "1",
      "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36",
      "Via": "2.0 6884828476070d32978b45d03c1cc437.cloudfront.net (CloudFront)",
      "X-Amz-Cf-Id": "mvToMffe1AsUJNcMJKUh-Rx26oBJsRBe2n9I1df3xqIAIENPR_ku3A==",
      "X-Amzn-Trace-Id": "Root=1-58aae2ff-0b0c5e4059cc97576211ba4a",
      "X-Forwarded-For": "101.181.175.227, 54.239.202.65",
      "X-Forwarded-Port": "443",
      "X-Forwarded-Proto": "https"
    },
    "queryStringParameters": null,
    "pathParameters": null,
    "stageVariables": null,
    "requestContext": {
      "accountId": "624388274630",
      "resourceId": "5jbqsp",
      "stage": "prod",
      "requestId": "51ba2876-f769-11e6-b507-4b10c8a6886a",
      "identity": {
        "cognitoIdentityPoolId": null,
        "accountId": null,
        "cognitoIdentityId": null,
        "caller": null,
        "apiKey": null,
        "sourceIp": "101.181.175.227",
        "accessKey": null,
        "cognitoAuthenticationType": null,
        "cognitoAuthenticationProvider": null,
        "userArn": null,
        "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36",
        "user": null
      },
      "resourcePath": "/serverless-example-dev-hello",
      "httpMethod": "GET",
      "apiId": "b5dyhej16l"
    },
    "body": null,
    "isBase64Encoded": false
  }
}

Factorial function implementation in Java 8

Implementation

package org.paradise.function;

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Created by terrence on 12/12/2016.
 */
public final class FactorialFunction {

    public static final Map<Integer, Long> FACTORIAL_MAP = new HashMap<>();

    // Note: the mapping function re-enters computeIfAbsent recursively, mutating the map
    // mid-computation. This happens to work on Java 8's HashMap, but it is not guaranteed,
    // and Java 9+ throws ConcurrentModificationException for it.
    public static final Function<Integer, Long> FACTORIAL = (x) ->
            FACTORIAL_MAP.computeIfAbsent(x,
                    n -> n * FactorialFunction.FACTORIAL.apply(n - 1));

    static {
        FACTORIAL_MAP.put(0, 1L); // FACTORIAL(0) - base case, otherwise apply(0) recurses forever
        FACTORIAL_MAP.put(1, 1L); // FACTORIAL(1)
    }

    private FactorialFunction() {

    }

}
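One caveat worth noting: the memoised FACTORIAL stores results as Long, and 21! already overflows a long. A hypothetical BigInteger variant of the same memoisation idea (class and method names are mine, not from the original) avoids the overflow, and checks the cache explicitly instead of re-entering computeIfAbsent:

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public final class BigFactorial {

    private static final Map<Integer, BigInteger> CACHE = new HashMap<>();

    static {
        CACHE.put(0, BigInteger.ONE); // 0! = 1, the base case
    }

    private BigFactorial() {
    }

    // Plain memoised recursion: look the cache up explicitly, so the map is
    // never modified while a computeIfAbsent computation is in progress.
    public static BigInteger factorial(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("n must be >= 0");
        }
        BigInteger cached = CACHE.get(n);
        if (cached != null) {
            return cached;
        }
        BigInteger result = BigInteger.valueOf(n).multiply(factorial(n - 1));
        CACHE.put(n, result);
        return result;
    }
}
```

With BigInteger, factorial(25) and beyond return exact values instead of silently overflowing.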

Unit test

package org.paradise.function;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

/**
 * Created by terrence on 12/12/2016.
 */
public class FactorialFunctionTest {

    @Test
    public void testFactorialFunction() throws Exception {

        assertEquals("Incorrect result", Long.valueOf(1), FactorialFunction.FACTORIAL.apply(1));
        assertEquals("Incorrect result", Long.valueOf(2), FactorialFunction.FACTORIAL.apply(2));

        assertEquals("Incorrect result", Long.valueOf(3628800), FactorialFunction.FACTORIAL.apply(10));
    }

}

Fibonacci function implementation in Java 8

Implementation

package org.paradise.function;

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Created by terrence on 12/12/2016.
 */
public final class FibonacciFunction {

    public static final Map<Integer, Long> FIBONACCI_MAP = new HashMap<>();

    // Note: the mapping function re-enters computeIfAbsent recursively, mutating the map
    // mid-computation. This happens to work on Java 8's HashMap, but it is not guaranteed,
    // and Java 9+ throws ConcurrentModificationException for it.
    public static final Function<Integer, Long> FIBONACCI = (x) ->
            FIBONACCI_MAP.computeIfAbsent(x,
                    n -> FibonacciFunction.FIBONACCI.apply(n - 2) + FibonacciFunction.FIBONACCI.apply(n - 1));

    static {
        FIBONACCI_MAP.put(0, 0L); // FIBONACCI(0)
        FIBONACCI_MAP.put(1, 1L); // FIBONACCI(1)
    }

    private FibonacciFunction() {

    }

}
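Since the recursive computeIfAbsent trick is fragile on newer JVMs (see the comment above), a simple bottom-up sketch computes the same values without any recursion or map mutation; the class and method names here are my own illustration, not from the original post:

```java
public final class FibonacciIterative {

    private FibonacciIterative() {
    }

    // Iterative bottom-up computation: walk the pair (F(i), F(i+1)) forward n steps.
    public static long fibonacci(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("n must be >= 0");
        }
        long previous = 0L; // F(0)
        long current = 1L;  // F(1)
        for (int i = 0; i < n; i++) {
            long next = previous + current;
            previous = current;
            current = next;
        }
        return previous;
    }
}
```

This runs in O(n) time and O(1) space, and stays within a long up to fibonacci(92).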

Unit test

package org.paradise.function;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

/**
 * Created by terrence on 12/12/2016.
 */
public class FibonacciFunctionTest {

    @Test
    public void testFibonacciFunction() throws Exception {

        assertEquals("Incorrect result", Long.valueOf(0), FibonacciFunction.FIBONACCI.apply(0));
        assertEquals("Incorrect result", Long.valueOf(1), FibonacciFunction.FIBONACCI.apply(1));
        assertEquals("Incorrect result", Long.valueOf(1), FibonacciFunction.FIBONACCI.apply(2));
        assertEquals("Incorrect result", Long.valueOf(2), FibonacciFunction.FIBONACCI.apply(3));
        assertEquals("Incorrect result", Long.valueOf(3), FibonacciFunction.FIBONACCI.apply(4));
        assertEquals("Incorrect result", Long.valueOf(5), FibonacciFunction.FIBONACCI.apply(5));
        assertEquals("Incorrect result", Long.valueOf(8), FibonacciFunction.FIBONACCI.apply(6));

        assertEquals("Incorrect result", Long.valueOf(13), FibonacciFunction.FIBONACCI.apply(7));
        assertEquals("Incorrect result", Long.valueOf(21), FibonacciFunction.FIBONACCI.apply(8));
        assertEquals("Incorrect result", Long.valueOf(34), FibonacciFunction.FIBONACCI.apply(9));
        assertEquals("Incorrect result", Long.valueOf(55), FibonacciFunction.FIBONACCI.apply(10));

        assertEquals("Incorrect result", Long.valueOf(12586269025L), FibonacciFunction.FIBONACCI.apply(50));
    }

}