public class LinkedTransferQueue<E> extends AbstractQueue<E>
implements TransferQueue<E>, java.io.Serializable {
/*
 * *** Overview of Dual Queues with Slack ***
*
* Dual Queues, introduced by Scherer and Scott
* (http://www.cs.rice.edu/~wns1/papers/2004-DISC-DDS.pdf) are
* (linked) queues in which nodes may represent either data or
* requests. When a thread tries to enqueue a data node, but
* encounters a request node, it instead "matches" and removes it;
* and vice versa for enqueuing requests. Blocking Dual Queues
* arrange that threads enqueuing unmatched requests block until
* other threads provide the match. Dual Synchronous Queues (see
* Scherer, Lea, & Scott
* http://www.cs.rochester.edu/u/scott/papers/2009_Scherer_CACM_SSQ.pdf)
* additionally arrange that threads enqueuing unmatched data also
* block. Dual Transfer Queues support all of these modes, as
* dictated by callers.
*
* A FIFO dual queue may be implemented using a variation of the
* Michael & Scott (M&S) lock-free queue algorithm
* (http://www.cs.rochester.edu/u/scott/papers/1996_PODC_queues.pdf).
* It maintains two pointer fields, "head", pointing to a
* (matched) node that in turn points to the first actual
* (unmatched) queue node (or null if empty); and "tail" that
* points to the last node on the queue (or again null if
* empty). For example, here is a possible queue with four data
* elements:
* head tail
* | |
* v v
* M -> U -> U -> U -> U
*
* The M&S queue algorithm is known to be prone to scalability and
* overhead limitations when maintaining (via CAS) these head and
* tail pointers. This has led to the development of
* contention-reducing variants such as elimination arrays (see
* Moir et al http://portal.acm.org/citation.cfm?id=1074013) and
* optimistic back pointers (see Ladan-Mozes & Shavit
* http://people.csail.mit.edu/edya/publications/OptimisticFIFOQueue-journal.pdf).
* However, the nature of dual queues enables a simpler tactic for
* improving M&S-style implementations when dual-ness is needed.
*
* In a dual queue, each node must atomically maintain its match
* status. While there are other possible variants, we implement
* this here as: for a data-mode node, matching entails CASing an
* "item" field from a non-null data value to null upon match, and
* vice-versa for request nodes, CASing from null to a data
* value. (Note that the linearization properties of this style of
* queue are easy to verify -- elements are made available by
* linking, and unavailable by matching.) Compared to plain M&S
* queues, this property of dual queues requires one additional
* successful atomic operation per enq/deq pair. But it also
* enables lower cost variants of queue maintenance mechanics. (A
* variation of this idea applies even for non-dual queues that
* support deletion of interior elements, such as
* j.u.c.ConcurrentLinkedQueue.)
*
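 * As an illustration only (not the actual Node class defined later
 * in the full source), a minimal dual node and its matching CAS
 * might look like the following, using a field updater in place of
 * the Unsafe-based CAS the real class uses:
 *
 *   import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
 *
 *   final class SketchNode {
 *       final boolean isData;  // true for data nodes, false for requests
 *       volatile Object item;  // non-null for data until matched;
 *                              // null for requests until matched
 *       volatile SketchNode next;
 *       static final AtomicReferenceFieldUpdater<SketchNode, Object>
 *           ITEM = AtomicReferenceFieldUpdater.newUpdater(
 *               SketchNode.class, Object.class, "item");
 *       SketchNode(Object item, boolean isData) {
 *           this.item = item;
 *           this.isData = isData;
 *       }
 *       // data node: tryMatch(item, null); request node: tryMatch(null, e)
 *       boolean tryMatch(Object cmp, Object val) {
 *           return ITEM.compareAndSet(this, cmp, val);
 *       }
 *   }
 *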
* Once a node is matched, its match status can never again
* change. We may thus arrange that the linked list of them
* contain a prefix of zero or more matched nodes, followed by a
* suffix of zero or more unmatched nodes. (Note that we allow
* both the prefix and suffix to be zero length, which in turn
* means that we do not use a dummy header.) If we were not
* concerned with either time or space efficiency, we could
* correctly perform enqueue and dequeue operations by traversing
* from a pointer to the initial node; CASing the item of the
* first unmatched node on match and CASing the next field of the
* trailing node on appends. (Plus some special-casing when
* initially empty). While this would be a terrible idea in
* itself, it does have the benefit of not requiring ANY atomic
* updates on head/tail fields.
*
* We introduce here an approach that lies between the extremes of
* never versus always updating queue (head and tail) pointers.
* This offers a tradeoff between sometimes requiring extra
* traversal steps to locate the first and/or last unmatched
* nodes, versus the reduced overhead and contention of fewer
* updates to queue pointers. For example, a possible snapshot of
* a queue is:
*
* head tail
* | |
* v v
* M -> M -> U -> U -> U -> U
*
* The best value for this "slack" (the targeted maximum distance
* between the value of "head" and the first unmatched node, and
* similarly for "tail") is an empirical matter. We have found
* that using very small constants in the range of 1-3 work best
* over a range of platforms. Larger values introduce increasing
* costs of cache misses and risks of long traversal chains, while
* smaller values increase CAS contention and overhead.
*
* Dual queues with slack differ from plain M&S dual queues by
* virtue of only sometimes updating head or tail pointers when
* matching, appending, or even traversing nodes; in order to
* maintain a targeted slack. The idea of "sometimes" may be
* operationalized in several ways. The simplest is to use a
* per-operation counter incremented on each traversal step, and
* to try (via CAS) to update the associated queue pointer
* whenever the count exceeds a threshold. Another, that requires
* more overhead, is to use random number generators to update
* with a given probability per traversal step.
*
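 * A sketch of the counter-based policy, using the SketchNode above
 * and hypothetical helpers (isMatched, casHead); the real code
 * below hard-wires the threshold rather than counting explicitly:
 *
 *   SketchNode h = head, p = h;
 *   int hops = 0;
 *   while (p != null && p.isMatched()) { // skip the matched prefix
 *       p = p.next;
 *       ++hops;
 *   }
 *   if (hops >= 2 && casHead(h, p))      // update only past the threshold
 *       h.next = h;                      // self-link the old head (below)
 *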
* In any strategy along these lines, because CASes updating
* fields may fail, the actual slack may exceed targeted
* slack. However, they may be retried at any time to maintain
* targets. Even when using very small slack values, this
* approach works well for dual queues because it allows all
* operations up to the point of matching or appending an item
* (hence potentially allowing progress by another thread) to be
* read-only, thus not introducing any further contention. As
* described below, we implement this by performing slack
* maintenance retries only after these points.
*
* As an accompaniment to such techniques, traversal overhead can
* be further reduced without increasing contention of head
* pointer updates: Threads may sometimes shortcut the "next" link
* path from the current "head" node to be closer to the currently
* known first unmatched node, and similarly for tail. Again, this
* may be triggered with using thresholds or randomization.
*
* These ideas must be further extended to avoid unbounded amounts
* of costly-to-reclaim garbage caused by the sequential "next"
* links of nodes starting at old forgotten head nodes: As first
* described in detail by Boehm
* (http://portal.acm.org/citation.cfm?doid=503272.503282) if a GC
* delays noticing that any arbitrarily old node has become
* garbage, all newer dead nodes will also be unreclaimed.
* (Similar issues arise in non-GC environments.) To cope with
* this in our implementation, upon CASing to advance the head
* pointer, we set the "next" link of the previous head to point
* only to itself; thus limiting the length of connected dead lists.
* (We also take similar care to wipe out possibly garbage
* retaining values held in other Node fields.) However, doing so
* adds some further complexity to traversal: If any "next"
* pointer links to itself, it indicates that the current thread
* has lagged behind a head-update, and so the traversal must
* continue from the "head". Traversals trying to find the
* current tail starting from "tail" may also encounter
* self-links, in which case they also continue at "head".
*
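 * In code form, a traversal step tolerant of self-linked
 * (off-list) nodes is simply (a sketch, with SketchNode as above):
 *
 *   SketchNode p = head;
 *   while (p != null) {
 *       // ... examine or match p here ...
 *       SketchNode q = p.next;
 *       p = (p == q) ? head : q; // self-link: we lagged behind a head
 *                                // update, so restart at current head
 *   }
 *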
* It is tempting in slack-based scheme to not even use CAS for
* updates (similarly to Ladan-Mozes & Shavit). However, this
* cannot be done for head updates under the above link-forgetting
* mechanics because an update may leave head at a detached node.
* And while direct writes are possible for tail updates, they
* increase the risk of long retraversals, and hence long garbage
* chains, which can be much more costly than is worthwhile
* considering that the cost difference of performing a CAS vs
* write is smaller when they are not triggered on each operation
* (especially considering that writes and CASes equally require
* additional GC bookkeeping ("write barriers") that are sometimes
* more costly than the writes themselves because of contention).
*
* *** Overview of implementation ***
*
* We use a threshold-based approach to updates, with a slack
* threshold of two -- that is, we update head/tail when the
* current pointer appears to be two or more steps away from the
* first/last node. The slack value is hard-wired: a path greater
* than one is naturally implemented by checking equality of
* traversal pointers except when the list has only one element,
* in which case we keep slack threshold at one. Avoiding tracking
* explicit counts across method calls slightly simplifies an
* already-messy implementation. Using randomization would
* probably work better if there were a low-quality dirt-cheap
* per-thread one available, but even ThreadLocalRandom is too
* heavy for these purposes.
*
* With such a small slack threshold value, it is not worthwhile
* to augment this with path short-circuiting (i.e., unsplicing
* interior nodes) except in the case of cancellation/removal (see
* below).
*
* We allow both the head and tail fields to be null before any
* nodes are enqueued; initializing upon first append. This
* simplifies some other logic, as well as providing more
* efficient explicit control paths instead of letting JVMs insert
* implicit NullPointerExceptions when they are null. While not
* currently fully implemented, we also leave open the possibility
* of re-nulling these fields when empty (which is complicated to
* arrange, for little benefit.)
*
* All enqueue/dequeue operations are handled by the single method
* "xfer" with parameters indicating whether to act as some form
* of offer, put, poll, take, or transfer (each possibly with
* timeout). The relative complexity of using one monolithic
* method outweighs the code bulk and maintenance problems of
* using separate methods for each case.
*
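 * For orientation, the mode constants and a few of the public
 * methods, as defined later in the full class, reduce to roughly
 * the following (condensed; not a complete listing):
 *
 *   private static final int NOW   = 0; // untimed poll, tryTransfer
 *   private static final int ASYNC = 1; // offer, put, add
 *   private static final int SYNC  = 2; // transfer, take
 *   private static final int TIMED = 3; // timed poll, tryTransfer
 *
 *   public void put(E e)      { xfer(e, true, ASYNC, 0); }
 *   public boolean offer(E e) { xfer(e, true, ASYNC, 0); return true; }
 *   public E poll()           { return xfer(null, false, NOW, 0); }
 *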
* Operation consists of up to three phases. The first is
* implemented within method xfer, the second in tryAppend, and
* the third in method awaitMatch.
 * (A condensed sketch of how the three phases fit together follows
 * the phase descriptions below.)
*
 * 1. Try to match an existing node
*
* Starting at head, skip already-matched nodes until finding
* an unmatched node of opposite mode, if one exists, in which
* case matching it and returning, also if necessary updating
* head to one past the matched node (or the node itself if the
* list has no other unmatched nodes). If the CAS misses, then
* a loop retries advancing head by two steps until either
* success or the slack is at most two. By requiring that each
* attempt advances head by two (if applicable), we ensure that
* the slack does not grow without bound. Traversals also check
* if the initial head is now off-list, in which case they
* start at the new head.
*
* If no candidates are found and the call was untimed
* poll/offer, (argument "how" is NOW) return.
*
 * 2. Try to append a new node (method tryAppend)
*
* Starting at current tail pointer, find the actual last node
* and try to append a new node (or if head was null, establish
* the first node). Nodes can be appended only if their
* predecessors are either already matched or are of the same
* mode. If we detect otherwise, then a new node with opposite
* mode must have been appended during traversal, so we must
* restart at phase 1. The traversal and update steps are
* otherwise similar to phase 1: Retrying upon CAS misses and
* checking for staleness. In particular, if a self-link is
* encountered, then we can safely jump to a node on the list
* by continuing the traversal at current head.
*
* On successful append, if the call was ASYNC, return.
*
 * 3. Await match or cancellation (method awaitMatch)
*
* Wait for another thread to match node; instead cancelling if
* the current thread was interrupted or the wait timed out. On
* multiprocessors, we use front-of-queue spinning: If a node
* appears to be the first unmatched node in the queue, it
* spins a bit before blocking. In either case, before blocking
* it tries to unsplice any nodes between the current "head"
* and the first unmatched node.
*
* Front-of-queue spinning vastly improves performance of
* heavily contended queues. And so long as it is relatively
* brief and "quiet", spinning does not much impact performance
* of less-contended queues. During spins threads check their
* interrupt status and generate a thread-local random number
* to decide to occasionally perform a Thread.yield. While
 * yield has underdefined specs, we assume that it might help,
* and will not hurt in limiting impact of spinning on busy
* systems. We also use smaller (1/2) spins for nodes that are
* not known to be front but whose predecessors have not
* blocked -- these "chained" spins avoid artifacts of
* front-of-queue rules which otherwise lead to alternating
* nodes spinning vs blocking. Further, front threads that
* represent phase changes (from data to request node or vice
* versa) compared to their predecessors receive additional
* chained spins, reflecting longer paths typically required to
* unblock threads during phase changes.
*
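 * Condensed into a pseudocode skeleton (helper names here are
 * hypothetical; the real xfer, tryAppend, and awaitMatch handle
 * many more details), the three phases fit together as:
 *
 *   E xferSketch(E e, boolean haveData, int how, long nanos) {
 *       for (;;) {
 *           // 1. try to match an existing opposite-mode node
 *           SketchNode m = findAndMatchExisting(e, haveData);
 *           if (m != null)
 *               return itemOf(m);   // matched: done
 *           if (how == NOW)
 *               return e;           // untimed poll/offer: give up
 *           // 2. append a node of our own mode at the tail
 *           SketchNode s = tryAppendSketch(e, haveData);
 *           if (s == null)
 *               continue;           // opposite mode appeared; retry
 *           if (how == ASYNC)
 *               return e;           // offer/put/add: done
 *           // 3. spin briefly, then park until matched, cancelled,
 *           //    or timed out
 *           return awaitMatchSketch(s, e, how == TIMED, nanos);
 *       }
 *   }
 *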
 * ** Unlinking removed interior nodes **
*
* In addition to minimizing garbage retention via self-linking
* described above, we also unlink removed interior nodes. These
* may arise due to timed out or interrupted waits, or calls to
* remove(x) or Iterator.remove. Normally, given a node that was
* at one time known to be the predecessor of some node s that is
* to be removed, we can unsplice s by CASing the next field of
* its predecessor if it still points to s (otherwise s must
* already have been removed or is now offlist). But there are two
* situations in which we cannot guarantee to make node s
* unreachable in this way: (1) If s is the trailing node of list
* (i.e., with null next), then it is pinned as the target node
* for appends, so can only be removed later after other nodes are
* appended. (2) We cannot necessarily unlink s given a
* predecessor node that is matched (including the case of being
* cancelled): the predecessor may already be unspliced, in which
* case some previous reachable node may still point to s.
* (For further explanation see Herlihy & Shavit "The Art of
* Multiprocessor Programming" chapter 9). Although, in both
* cases, we can rule out the need for further action if either s
* or its predecessor are (or can be made to be) at, or fall off
* from, the head of list.
*
* Without taking these into account, it would be possible for an
* unbounded number of supposedly removed nodes to remain
* reachable. Situations leading to such buildup are uncommon but
* can occur in practice; for example when a series of short timed
* calls to poll repeatedly time out but never otherwise fall off
* the list because of an untimed call to take at the front of the
* queue.
*
* When these cases arise, rather than always retraversing the
* entire list to find an actual predecessor to unlink (which
* won't help for case (1) anyway), we record a conservative
* estimate of possible unsplice failures (in "sweepVotes").
* We trigger a full sweep when the estimate exceeds a threshold
* ("SWEEP_THRESHOLD") indicating the maximum number of estimated
* removal failures to tolerate before sweeping through, unlinking
* cancelled nodes that were not unlinked upon initial removal.
* We perform sweeps by the thread hitting threshold (rather than
* background threads or by spreading work to other threads)
* because in the main contexts in which removal occurs, the
* caller is already timed-out, cancelled, or performing a
* potentially O(n) operation (e.g. remove(x)), none of which are
* time-critical enough to warrant the overhead that alternatives
* would impose on other threads.
*
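 * The voting step at the end of unsplice (later in the full class)
 * is essentially the following loop, accumulating one vote per
 * apparent failure and sweeping once the threshold is reached:
 *
 *   for (;;) {
 *       int v = sweepVotes;
 *       if (v < SWEEP_THRESHOLD) {
 *           if (casSweepVotes(v, v + 1))
 *               break;            // recorded a vote
 *       }
 *       else if (casSweepVotes(v, 0)) {
 *           sweep();              // unlink cancelled nodes
 *           break;
 *       }
 *   }
 *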
* Because the sweepVotes estimate is conservative, and because
* nodes become unlinked "naturally" as they fall off the head of
* the queue, and because we allow votes to accumulate even while
* sweeps are in progress, there are typically significantly fewer
* such nodes than estimated. Choice of a threshold value
* balances the likelihood of wasted effort and contention, versus
* providing a worst-case bound on retention of interior nodes in
* quiescent queues. The value defined below was chosen
* empirically to balance these under various timeout scenarios.
*
* Note that we cannot self-link unlinked interior nodes during
* sweeps. However, the associated garbage chains terminate when
* some successor ultimately falls off the head of the list and is
* self-linked.
*/
private static final long serialVersionUID = -3223113410248163686L;
/** True if on multiprocessor */
private static final boolean MP =
Runtime.getRuntime().availableProcessors() > 1;
/**
* The number of times to spin (with randomly interspersed calls
* to Thread.yield) on multiprocessor before blocking when a node
* is apparently the first waiter in the queue. See above for
* explanation. Must be a power of two. The value is empirically
* derived -- it works pretty well across a variety of processors,
* numbers of CPUs, and OSes.
*/
private static final int FRONT_SPINS = 1 << 7;
/**
* The number of times to spin before blocking when a node is
* preceded by another node that is apparently spinning. Also
* serves as an increment to FRONT_SPINS on phase changes, and as
* base average frequency for yielding during spins. Must be a
* power of two.
*/
private static final int CHAINED_SPINS = FRONT_SPINS >>> 1;
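/*
 * A condensed sketch of how awaitMatch (later in the full class)
 * uses these constants; "s" is the caller's waiting node, "e" its
 * original item, and randomYield() stands in for the cheap
 * per-thread randomization the real code uses:
 *
 *   int spins = MP ? FRONT_SPINS : 0;   // spin only on multiprocessors
 *   while (s.item == e) {               // not yet matched or cancelled
 *       if (spins > 0) {
 *           --spins;
 *           if (randomYield())          // occasionally yield while spinning
 *               Thread.yield();
 *       }
 *       else
 *           LockSupport.park(this);     // block until unparked by a matcher
 *   }
 */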
/**
* The maximum number of estimated removal failures (sweepVotes)
* to tolerate before sweeping through the queue unlinking
* cancelled nodes that were not unlinked upon initial
* removal. See above for explanation. The value must be at least
* two to avoid useless sweeps when removing trailing nodes.
*/
static final int SWEEP_THRESHOLD = 32;
/** head of the queue; null until first enqueue */
// head node
transient volatile Node head;
/** tail of the queue; null until first append */
// tail node
private transient volatile Node tail;
/** The number of apparent failures to unsplice removed nodes */
// count of apparent failures to unsplice removed nodes
private transient volatile int sweepVotes;
// CAS methods for fields
private boolean casTail(Node cmp, Node val) {
return UNSAFE.compareAndSwapObject(this, tailOffset, cmp, val);
}
private boolean casHead(Node cmp, Node val) {
return UNSAFE.compareAndSwapObject(this, headOffset, cmp, val);
}
private boolean casSweepVotes(int cmp, int val) {
return UNSAFE.compareAndSwapInt(this, sweepVotesOffset, cmp, val);
}
/**
* Creates an initially empty {@code LinkedTransferQueue}.
*/
public LinkedTransferQueue() {
}
/**
* Creates a {@code LinkedTransferQueue}
* initially containing the elements of the given collection,
* added in traversal order of the collection's iterator.
*
* @param c the collection of elements to initially contain
* @throws NullPointerException if the specified collection or any
* of its elements are null
*/
public LinkedTransferQueue(Collection<? extends E> c) {
this();
addAll(c);
}
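/*
 * Illustrative usage of the public API (not part of the class):
 *
 *   final LinkedTransferQueue<String> q = new LinkedTransferQueue<String>();
 *   q.offer("a");                       // enqueue without waiting (ASYNC)
 *   String a = q.poll();                // "a": matches the queued data (NOW)
 *   boolean sent = q.tryTransfer("b");  // false: no consumer waiting (NOW)
 *   new Thread(new Runnable() {
 *       public void run() {
 *           try { q.transfer("c"); }    // blocks until a consumer takes it
 *           catch (InterruptedException ignore) { }
 *       }
 *   }).start();
 *   String c = q.take();                // receives "c", unblocking transfer
 */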
// Unsafe mechanics
private static final sun.misc.Unsafe UNSAFE;
private static final long headOffset;
private static final long tailOffset;
private static final long sweepVotesOffset;
static {
try {
UNSAFE = sun.misc.Unsafe.getUnsafe();
Class<?> k = LinkedTransferQueue.class;
headOffset = UNSAFE.objectFieldOffset
(k.getDeclaredField("head"));
tailOffset = UNSAFE.objectFieldOffset
(k.getDeclaredField("tail"));
sweepVotesOffset = UNSAFE.objectFieldOffset
(k.getDeclaredField("sweepVotes"));
} catch (Exception e) {
throw new Error(e);
}
}
}