One simple way to get object size
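The dependency block above is truncated. One common library for inspecting object size on the JVM is OpenJDK's JOL; this is an assumption, the original post may have used a different tool:

```xml
<!-- Assumed dependency: OpenJDK JOL (Java Object Layout); version is illustrative -->
<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.17</version>
</dependency>
```

With JOL on the classpath, `ClassLayout.parseInstance(obj).toPrintable()` prints the object's field-by-field layout, including the header and any padding.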
Consider the code below, which increments a shared counter `number` from many threads without any synchronization.
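The original listing is lost except for its first line. A minimal reconstruction of the unsynchronized version might look like this (the thread count and loop bounds are assumptions based on the synchronized listing later in the post, and `join` replaces the original fixed sleep):

```java
public class UnsafeCounter {
    static int number = 0; // shared counter, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[50];
        for (int i = 0; i < 50; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 2; j++) {
                    ++number; // read-modify-write: not atomic, updates can be lost
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for all threads, so the printed value is final
        }
        // With lost updates the total can land anywhere at or below 100
        System.out.println("finally " + number);
    }
}
```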
In the end, the value of `number` is unpredictable: different runs might print 960, 977, or 999.
`volatile` keeps every thread reading the "global" variable with a consistent, up-to-date value. But `++number` is not atomic, so updating `number` from multiple threads is still unsafe; adding the `synchronized` keyword fixes it:

```java
public void A() throws InterruptedException {
    for (int i = 0; i < 50; i++) {
        Thread thread = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 2; i++) {
                    synchronized (integerLock) {
                        ++number;
                        System.out.println(Thread.currentThread().getName() + " get " + number);
                    }
                }
            }
        });
        thread.start();
    }
    Thread.sleep(3000);
    System.out.println("finally " + number);
}
```
So `synchronized` is a usable approach in this situation. Compared with optimistic locking, however, `synchronized` costs too much, since it is a form of pessimistic locking.
volatile, n.: a volatile substance; a substance that changes readily from solid or liquid to vapor. In other words, something that can change at any time.
A `lock`-prefixed instruction claims the bus or the cache line (at the CPU level) exclusively for one core, which hurts multi-core performance. CAS plus `volatile` is a kind of optimistic lock; the code follows.
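The CAS listing itself is lost. A sketch consistent with the later references to `innerCas` and a retry sleep could look like this (the class name `CasCounter` and the 1 ms retry interval are my assumptions; the post used `Thread.sleep(100)`):

```java
public class CasCounter {
    private volatile int value = 0; // volatile: every thread sees the latest value

    // Simulated CAS: real code would use VarHandle/AtomicInteger compareAndSet
    private synchronized boolean innerCas(int expect, int update) {
        if (value == expect) {
            value = update;
            return true;
        }
        return false;
    }

    public int increaseAndGet() throws InterruptedException {
        while (true) {
            int before = value;            // snapshot of value for this thread
            int next = before + 1;         // update computed from the same snapshot
            if (innerCas(before, next)) {
                return next;               // success: our snapshot was still current
            }
            Thread.sleep(1);               // retry interval before trying again
        }
    }

    public int get() { return value; }
}
```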
In conclusion, the method is reliable. But I am more interested in the parameters of this program: the `Thread.sleep(100)` statement may affect performance. At least that is what I believed when I wrote the program.
Let's test it. My expectation: the first parameter will make `increaseAndGet` faster, and the second will make thread competition fiercer.
When the retry interval decreases and threads compete, retries occur, since `value` may already have been modified by another thread.
Consider these two statements:

```java
int next = value + 1;        // statement 1: reads value
if (innerCas(value, next)) { // statement 2: reads value again
```

`value` is read twice; another thread may change it between the two reads, so the CAS can succeed against the new value while `next` was computed from the old one.
Fix the bug by reading `value` exactly once into a local:

```java
int before = value;          // save value for thread-x
int next = before + 1;       // compute the updated value from the same snapshot
if (innerCas(before, next)) {
    return next;
} else {
    // xxx
}
```
Experiment results and analysis (rows: retry interval; columns: wait time; each cell appears to be conflict count / total execution time):
retry/wait | 50 | 100 | 200 | 500 | 1000 |
---|---|---|---|---|---|
50 | 479/7422 | 50/10253 | 22/20074 | 22/49760 | 18/99256 |
100 | 60/5609 | 496/14968 | 40/20349 | 13/49951 | 23/100053 |
200 | 33/5698 | 103/12056 | 379/27471 | 17/50160 | 28/100057 |
500 | 21/6054 | 47/12355 | 18/20849 | 495/74570 | 59/102554 |
1000 | 18/6810 | 18/11759 | 23/22253 | 107/60574 | 591/159076 |
When the wait time is as short as possible, the retry interval influences the conflict count significantly: the shorter the retry interval, the more times threads attempt to update `value`.
When the wait time is long enough, the relation reverses. Varying retry and wait between 50 and 1000 has no significant effect on total execution time, which depends mainly on the wait time.
Slow start: when a host begins sending data, injecting a large burst of bytes into the network immediately may cause congestion, because the network load is not yet known. A better approach is to probe first, growing the send window gradually, that is, increasing the congestion window (cwnd) from small to large. Typically, when segments are first sent, cwnd is set to one maximum segment size (MSS). Each time an acknowledgment for a new segment arrives, cwnd is increased by at most one MSS. Growing the sender's cwnd step by step in this way keeps the rate at which packets are injected into the network reasonable.
After each transmission round, cwnd doubles. A transmission round takes one round-trip time (RTT); "round" emphasizes that all segments allowed by the current cwnd are sent back to back, and the acknowledgment for the last byte sent has been received.
Note that the "slow" in slow start does not mean cwnd grows slowly. It means TCP starts with cwnd = 1, so the sender initially transmits only one segment (to probe the network for congestion) and then gradually grows cwnd.
To prevent cwnd from growing so large that it causes congestion, a slow-start threshold, the state variable ssthresh, is also maintained. It is used as follows:
When cwnd < ssthresh, use the slow-start algorithm above.
When cwnd > ssthresh, stop slow start and switch to the congestion-avoidance algorithm.
When cwnd = ssthresh, either slow start or congestion avoidance may be used.
Congestion avoidance: grow cwnd slowly, adding 1 to the sender's cwnd per RTT instead of doubling it. cwnd then grows linearly, much more slowly than under slow start.
Fast recovery works together with fast retransmit; it has two key points:
1. When the sender receives three duplicate acknowledgments in a row, it performs "multiplicative decrease": ssthresh is halved, to preempt network congestion. Note that slow start is NOT executed next.
2. Since the sender now believes the network is probably not congested, unlike slow start it does not reset cwnd to 1. Instead, cwnd is set to the halved ssthresh, and congestion avoidance ("additive increase") begins, growing cwnd slowly and linearly.
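The rules above can be sketched as a toy simulation (all numbers and the exact threshold handling are illustrative, not taken from any real TCP stack):

```java
public class CwndSim {
    // Returns cwnd (in MSS units) after `rounds` RTTs, starting from cwnd = 1.
    static int cwndAfter(int rounds, int ssthresh) {
        int cwnd = 1;
        for (int i = 0; i < rounds; i++) {
            if (cwnd < ssthresh) {
                cwnd *= 2;                            // slow start: double each round
                if (cwnd > ssthresh) cwnd = ssthresh; // cap at the threshold
            } else {
                cwnd += 1;                            // congestion avoidance: +1 per RTT
            }
        }
        return cwnd;
    }

    // Fast recovery after three duplicate ACKs: multiplicative decrease.
    // Returns the new cwnd, which restarts from the halved ssthresh, not from 1.
    static int fastRecovery(int cwnd) {
        return Math.max(cwnd / 2, 1);
    }

    public static void main(String[] args) {
        // With ssthresh = 8: 1 -> 2 -> 4 -> 8 -> 9 -> 10 -> 11
        System.out.println(cwndAfter(6, 8)); // prints 11
    }
}
```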
###Substring search algorithm: Sunday
```c
int sunday(char SArrary[], int iSLen, char TArrary[], int iTLen)
```
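Only the C signature survives from the original listing. As a sketch of the same idea in Java: on a mismatch, Sunday looks at the character just past the current window and shifts the window so that character aligns with its last occurrence in the pattern (or skips past it entirely):

```java
public class Sunday {
    // Returns the index of the first occurrence of t in s, or -1 if absent.
    static int sunday(String s, String t) {
        int n = s.length(), m = t.length();
        if (m == 0) return 0;
        int[] shift = new int[256];
        java.util.Arrays.fill(shift, m + 1);      // default: skip the whole window + 1
        for (int i = 0; i < m; i++) {
            shift[t.charAt(i) & 0xFF] = m - i;    // align last occurrence of each char
        }
        int pos = 0;
        while (pos + m <= n) {
            int i = 0;
            while (i < m && s.charAt(pos + i) == t.charAt(i)) i++;
            if (i == m) return pos;               // full match at pos
            if (pos + m >= n) return -1;          // no character past the window
            pos += shift[s.charAt(pos + m) & 0xFF]; // jump by the Sunday shift
        }
        return -1;
    }
}
```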
###Permutation with Lua code
```lua
function print_arr(dataArr, length)
```
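The Lua listing itself is gone; only the `print_arr` signature remains. The usual recursive swap-based permutation it likely implemented can be sketched in Java (the names `permute` and the collection-based output are my own):

```java
import java.util.ArrayList;
import java.util.List;

public class Permutation {
    // Collects all permutations of arr by choosing each candidate for slot k in turn.
    static void permute(int[] arr, int k, List<int[]> out) {
        if (k == arr.length) {
            out.add(arr.clone());
            return;
        }
        for (int i = k; i < arr.length; i++) {
            int tmp = arr[k]; arr[k] = arr[i]; arr[i] = tmp; // choose arr[i] for slot k
            permute(arr, k + 1, out);
            tmp = arr[k]; arr[k] = arr[i]; arr[i] = tmp;     // undo the swap (backtrack)
        }
    }

    public static void main(String[] args) {
        List<int[]> out = new ArrayList<>();
        permute(new int[]{1, 2, 3}, 0, out);
        System.out.println(out.size()); // 3! = 6 permutations
    }
}
```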
#Memory layout of variables in a C program
2. Sample
```c
void ppp(int parameter) {
    return;
}

int main() {
    ppp(12);
    return 0;
}
```
```
REG  BEGIN     PUSH EBP      MOV EBP, ESP
EBP  0036FD88  0036FD88      0036FD74       main in
ESP  0036FD78  0036FD74      0036FD74

REG  BEGIN     PUSH EBP      MOV EBP, ESP
EBP  0036FD74  0036FD74      0036FC9C       ppp in
ESP  0036FCA0  0036FC9C      0036FC9C

REG  BEGIN     MOV ESP, EBP  POP EBP
EBP  0036FC9C  0036FC9C      0036FD74       ppp return
ESP  0036F9D0  0036FC9C      0036FCA0

REG  BEGIN     MOV ESP, EBP  POP EBP
EBP  0036FD74  0036FD74      0036FD88       main return
ESP  0036FD74  0036FD74      0036FD78
```
Before the program enters the main procedure, EBP is 0x0036FD88, so this is the base address of the current call frame, and ESP (0x0036FD78) sits below it, which anyone familiar with the x86 architecture will recognize. Now we enter main, and the base address must be saved right away: context should be saved and restored when key operations begin and end. So 0x0036FD88 is pushed onto the stack and ESP decreases to 0x0036FD74; at the same time, EBP is moved down to ESP (0x0036FD74).
####Linked-list reversal
```c
Node * reverse_link(Node *cur){
```
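Only the C signature of `reverse_link` survives. The standard iterative reversal it presumably contained, sketched in Java:

```java
public class LinkReverse {
    static class Node {
        int val;
        Node next;
        Node(int val) { this.val = val; }
    }

    // Iteratively reverses the list starting at cur and returns the new head.
    static Node reverse(Node cur) {
        Node prev = null;
        while (cur != null) {
            Node next = cur.next; // remember the rest of the list
            cur.next = prev;      // point the current node backwards
            prev = cur;           // prev advances to the current node
            cur = next;           // move on to the remembered rest
        }
        return prev;
    }
}
```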
###LCS Problem
```cpp
int getLcs(string a, string b) {
```
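The body of the C++ listing is lost. The usual O(|a| * |b|) dynamic program for LCS length, sketched in Java:

```java
public class Lcs {
    // Returns the length of the longest common subsequence of a and b.
    static int getLcs(String a, String b) {
        int n = a.length(), m = b.length();
        int[][] dp = new int[n + 1][m + 1]; // dp[i][j]: LCS of a[0..i) and b[0..j)
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1] + 1; // matching chars extend the LCS
                } else {
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // drop one char
                }
            }
        }
        return dp[n][m];
    }
}
```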
Generally, we use the `++` operator to increment a variable, especially in loops.
Primitive datatype case
We can use `++i` and `i++` interchangeably. Translate the two statements into assembly, first `i++`, as in `a = i++;`:
For primitive types the compiler emits equivalent instructions in both forms, so they cost the same.
Now consider the other situation.
User-defined datatype case
Consider a class `INTEGER`. For a class type, the prefix `operator++` can increment in place and return a reference, while the postfix `operator++` must construct a temporary copy of the old value to return, so `++i` can be cheaper.

```cpp
template <class T>
```
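The `template <class T>` listing is lost, and Java has no operator overloading, but the cost difference can be sketched with two methods: the prefix-style one mutates and returns `this`, while the postfix-style one must allocate a copy of the old value (the class name `Integer2` stands in for the post's `INTEGER`):

```java
public class Integer2 { // stand-in for the post's INTEGER template class
    private int v;
    Integer2(int v) { this.v = v; }

    // ++i analogue: increment, then return this object (no allocation).
    Integer2 preIncrement() {
        v++;
        return this;
    }

    // i++ analogue: copy the old value first so the caller can still see it.
    Integer2 postIncrement() {
        Integer2 old = new Integer2(v); // the extra copy is the cost of postfix
        v++;
        return old;
    }

    int get() { return v; }
}
```

When the returned value is discarded a C++ compiler can often elide the copy, but for heavyweight user-defined types `++i` is the safer habit.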