Occam’s razor [1]
Unlike Alpha-Beta, classical Razoring [2] prunes branches forward if the static evaluation of a position is less than or equal to alpha. It relies on the assumption that from any given position the opponent will be able to find at least one move that improves his position, the Null Move Observation. There are various implementations of Razoring, which in some way blend pruning and reductions near the horizon: Razoring either prunes forward based on a reduced search, possibly the quiescence search, or it reduces the search depth based on the static evaluation or a quiescence search.
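The basic condition can be sketched as a small predicate. This is a minimal illustration of the idea, not code from any particular engine; the function name and parameters are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical sketch of the classical razoring condition at a
   frontier node (depth == 1): when the static evaluation already
   fails to reach alpha, the branch is pruned forward (e.g. dropped
   into quiescence search) instead of being searched at full depth.
   Razoring is typically skipped while in check, since the static
   evaluation is unreliable there. */
bool razor_prune(int depth, bool in_check, int static_eval, int alpha)
{
    if (depth != 1 || in_check)        /* only at the horizon, never in check */
        return false;
    return static_eval <= alpha;       /* hopeless branch: prune forward */
}
```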
There is an even more aggressive pruning technique at depth = 4 nodes, called Deep Razoring. Its weaker assumption is that the opponent will be able to improve his position within a your-move, my-move, your-move sequence.
Ernst A. Heinz introduced Limited Razoring, pushing the idea of futility pruning one step further to pre-pre-frontier nodes (depth = 3); he combines it with the depth-reduction mechanism of Deep Razoring [4].
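The reduction mechanism at pre-pre-frontier nodes can be sketched as follows. The margin value and all names are illustrative assumptions, not taken from Heinz's actual implementation.

```c
#include <stdbool.h>

/* Hypothetical sketch of limited razoring at pre-pre-frontier nodes
   (depth == 3): when the static evaluation plus a large safety margin
   still cannot reach alpha, the subtree is not pruned outright but
   searched with its depth reduced by one ply. The margin (~ a queen,
   in centipawns) is an illustrative choice. */
#define LIMITED_RAZOR_MARGIN 900

int razored_depth(int depth, bool in_check, int static_eval, int alpha)
{
    if (depth == 3 && !in_check &&
        static_eval + LIMITED_RAZOR_MARGIN <= alpha)
        return depth - 1;   /* reduce instead of pruning outright */
    return depth;
}
```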
/*
----------------------------------------------------------
| |
| now we toss in the "razoring" trick, which simply says |
| if we are doing fairly badly, we can reduce the depth |
| an additional ply, if there was nothing at the current |
| ply that caused an extension. |
| |
----------------------------------------------------------
*/
if (depth < 3 * INCREMENT_PLY && depth >= 2 * INCREMENT_PLY &&
    !tree->in_check[ply] && extensions == -60) {
  register int value = -Evaluate(tree, ply + 1, ChangeSide(wtm),
                                 -(beta + 51), -(alpha - 51));
  if (value + 50 < alpha)
    extensions -= 60;
}
A different razoring approach was proposed by Robert Hyatt in CCC [9], based on an idea by Tord Romstad [10], and abandoned in Crafty after cluster testing [11]. Inside a PVS framework, dropping into the quiescence search is intended at strongly expected All-nodes near the tips (pre-pre-frontier nodes or below, depth <= 3): that is, a null window (alpha = beta - 1) is passed, and the static evaluation is below beta by a margin (~three pawns), or <= alpha. If under these conditions the quiescence search confirms the fail-low characteristic, its score is trusted and returned.
/*
************************************************************
* *
* now we try a quick Razoring test. If we are within 3 *
* plies of a tip, and the current eval is 3 pawns (or *
* more) below beta, then we just drop into a q-search *
* to try to get a quick cutoff without searching more in *
* a position where we are way down in material. *
* *
************************************************************
*/
if (razoring_allowed && depth <= razor_depth) {
  if (alpha == beta - 1) {      /* null window ? */
    if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
      /* likely a fail-low node ? */
      value = QuiesceChecks(tree, alpha, beta, wtm, ply);
      if (value < beta)
        return value;           /* fail soft */
    }
  }
}
Similar code appears in Jury Osipov’s open source engine Strelka 2.0 [12], failing a bit harder. The interesting thing is the missing new_value < beta condition in the depth = 1 case: if the static evaluation indicates a fail-low node, but the quiescence search fails high, the score of the reduced fail-high search is returned, since there was obviously a winning capture raising the score, and one assumes a quiet move near the horizon will not do better [13].
Strelka.c line 393 (slightly modified for clarification)
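The asymmetry described above can be sketched as a small decision function. This is not Strelka's actual source; the function name and the shape of the check are illustrative assumptions based on the description.

```c
#include <stdbool.h>

/* Hypothetical sketch of the Strelka-style asymmetry: at depth == 1
   the quiescence score is trusted and returned unconditionally, even
   when it fails high (a winning capture raised the score, and a quiet
   move is assumed not to do better); at greater razoring depths the
   score is trusted only when it confirms the fail low (qscore < beta). */
bool trust_razor_score(int depth, int qscore, int beta)
{
    if (depth == 1)
        return true;            /* fail-high score is returned too */
    return qscore < beta;       /* deeper: only confirmed fail lows */
}
```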
user:GerdIsenberg
Classical Razoring is known to be risky. It was likely an improvement in Shannon Type B programs with a fixed search width. Today's programs more likely rely on: