`curve.divideAtLength(length [, opt])`

Divide the curve into two curves at the point that lies `length` away from the beginning of the curve. Returns an array with two new curves without modifying the original curve. If a negative `length` is provided, the algorithm measures from the end of the curve instead. If `length` is greater than the length of the curve, the curve is divided at the closest endpoint instead.

The curve is first subdivided according to `opt.precision` (refer to the `curve.length()` documentation for more information about precision and curve flattening). Then, the subdivision that contains the point at `length` is identified. A binary search is performed on that subdivision until a curve is found whose endpoint lies within `opt.precision` of `length`. That endpoint is then used to divide the curve.
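The search described above can be sketched in plain JavaScript. The sketch below is illustrative only and is not the library's implementation: it assumes a standalone cubic Bézier given as four `[x, y]` control points, uses de Casteljau splitting, and uses chord lengths as the flattening measure; all function names are hypothetical. It flattens the curve into 2^precision pieces, locates the piece containing the target length, then bisects within that piece.

```javascript
// Illustrative sketch only -- not the library's implementation.
// A cubic Bezier curve is represented as four [x, y] control points.

// De Casteljau: split a cubic Bezier at parameter t into two cubics.
function splitCubic(p, t) {
    const lerp = (a, b) => [a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t];
    const [p0, p1, p2, p3] = p;
    const p01 = lerp(p0, p1), p12 = lerp(p1, p2), p23 = lerp(p2, p3);
    const p012 = lerp(p01, p12), p123 = lerp(p12, p23);
    const mid = lerp(p012, p123);
    return [[p0, p01, p012, mid], [mid, p123, p23, p3]];
}

// Chord length between a piece's endpoints (accurate once pieces are near-flat).
const chord = (p) => Math.hypot(p[3][0] - p[0][0], p[3][1] - p[0][1]);

function divideAtLength(points, length, precision = 3) {
    // 1. Flatten the curve into 2^precision pieces by halving recursively.
    let pieces = [points];
    for (let d = 0; d < precision; d++) {
        pieces = pieces.flatMap((p) => splitCubic(p, 0.5));
    }
    const lengths = pieces.map(chord);
    const total = lengths.reduce((a, b) => a + b, 0);

    // A negative length measures from the end; out-of-range values clamp
    // to the closest endpoint.
    let target = length < 0 ? total + length : length;
    target = Math.max(0, Math.min(total, target));

    // 2. Locate the flattened piece that contains the target length.
    let i = 0;
    let acc = 0; // arc length accumulated before piece i
    while (i < pieces.length - 1 && acc + lengths[i] < target) {
        acc += lengths[i];
        i += 1;
    }

    // 3. Bisect inside that piece until the accumulated length of the left
    //    part is close enough to the target.
    const n = 1 << precision;
    let tLo = i / n;
    let tHi = (i + 1) / n;
    let piece = pieces[i];
    for (let k = 0; k < 24; k++) {
        const [left, right] = splitCubic(piece, 0.5);
        const leftLen = chord(left);
        const tMid = (tLo + tHi) / 2;
        if (acc + leftLen <= target) {
            acc += leftLen;   // division point is in the right half
            piece = right;
            tLo = tMid;
        } else {
            piece = left;     // division point is in the left half
            tHi = tMid;
        }
    }

    // Split the original curve once, at the parameter found by the search.
    return splitCubic(points, (tLo + tHi) / 2);
}
```

For a degenerate straight-line cubic from (0, 0) to (100, 0), `divideAtLength(c, 40)` yields two curves meeting near x = 40, and `divideAtLength(c, -30)` divides near x = 70.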

The default value of `opt.precision` is 3; this corresponds to a maximum observed error of 0.1%. As a rule of thumb, increasing precision by 1 doubles the number of operations needed to find the point to be returned (on top of the cost of curve subdivision); the exact numbers vary for every individual curve, however.

The `opt.subdivisions` property may be specified, directly providing an array of pre-computed curve subdivisions from which to calculate curve length. Use the `curve.getSubdivisions()` function to obtain such an array. The `opt.precision` property is still necessary, however; it determines the precision of the point search algorithm.
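The caching pattern behind `opt.subdivisions` can be sketched as follows. This is a standalone illustrative sketch with hypothetical names and chord-length flattening, not the library's code: the expensive step (flattening the curve into subdivisions) runs once, and subsequent queries reuse the resulting array.

```javascript
// Illustrative sketch of the pre-computed subdivisions pattern.
// A cubic Bezier is represented as four [x, y] control points.

// De Casteljau split at t = 0.5.
function halve(p) {
    const mid2 = (a, b) => [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2];
    const [p0, p1, p2, p3] = p;
    const p01 = mid2(p0, p1), p12 = mid2(p1, p2), p23 = mid2(p2, p3);
    const p012 = mid2(p01, p12), p123 = mid2(p12, p23);
    const m = mid2(p012, p123);
    return [[p0, p01, p012, m], [m, p123, p23, p3]];
}

// The costly step: flatten a curve into 2^precision near-flat pieces.
function getSubdivisions(points, precision = 3) {
    let pieces = [points];
    for (let d = 0; d < precision; d++) pieces = pieces.flatMap(halve);
    return pieces;
}

// A cheap query: approximate length as the sum of the pieces' chords.
function lengthFromSubdivisions(pieces) {
    return pieces.reduce(
        (sum, p) => sum + Math.hypot(p[3][0] - p[0][0], p[3][1] - p[0][1]),
        0);
}

// Flatten once, then reuse the same array across many queries.
const c = [[0, 0], [25, 0], [75, 0], [100, 0]];
const subdivisions = getSubdivisions(c, 3);
const total = lengthFromSubdivisions(subdivisions);
```

In the real API, the analogous flow is to call `curve.getSubdivisions()` once and pass the result back as `opt.subdivisions` to calls such as `curve.length()` and `curve.divideAtLength()`, so the flattening work is not repeated per call.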