
iOS NEON 介绍

2011-07-04 17:15 · 651 views
I've recently been doing some porting work on iOS. Overall, projects written in C plus ARM assembly are quite easy to bring over to iOS, with very few changes needed (the ARM asm file format does need some adjustment). I came across this excellent article online, so I'm reposting it here, both to share it and for my own future reference.

Reposted from: http://wanderingcoder.net/2010/06/02/intro-neon/

A sometimes overlooked addition to the iPhone platform that debuted with the iPhone 3GS is the presence of an SIMD engine called NEON. Just like AltiVec for PowerPC and MMX/SSE for x86, this allows multiple computations to be performed at once on ARM, giving an important speedup to some algorithms, on condition that the developer specifically codes for it.

Given that, among the iPhone OS devices, only the iPhone 3GS and the third-gen iPod Touch featured NEON up until recently, it typically wasn’t worth the effort to work on a NEON optimization since it would benefit only these devices, unless the application could require running on one of them (e.g. because its purpose is to process videos recorded by the iPhone 3GS). However, with the arrival of the iPad it makes much more sense to develop NEON optimizations. In this post I’ll try to give you a primer on NEON.

Before we begin, a bit of setup

This instruction set extension is actually called Advanced SIMD; it is an optional extension to the ARMv7-A profile. However, when ARMv7 is mentioned, the presence of the Advanced SIMD extensions is generally implied, unless mentioned otherwise. “NEON” actually refers to the implementation of this extension in ARM processors, but this name is commonly used to designate these SIMD instructions as well as the processor feature.

So far, Apple has only shipped one processor core featuring NEON: the Cortex A8, in three devices: the iPhone 3GS, the third-gen iPod Touch, and the iPad. This means that besides the general details of NEON that are valid whichever the implementation, the details of NEON as implemented in the Cortex A8 are of high interest to us since it’s the only processor core which will run iPhone OS NEON-optimized code at the moment.

Do You Need It?

iPhone OS devices (even the iPad) are pretty focused. The user is unlikely to want to perform scientific calculations, simulation, offline rendering, etc. on these devices, so the main applications for NEON here are multimedia and gaming. Then, check whether the work hasn’t already been done for you in the Accelerate framework, new in iPhone OS 4.0, which you can use once the update lands. However, iPhone OS 4.0 won’t make it to the iPad until this fall, and of course if the algorithm you need isn’t in Accelerate, it’s not going to be of any help to you; in those cases it makes sense to write your own NEON code.

As always when optimizing, make sure you don’t optimize the wrong thing. When doing multimedia work, hunches about what is taking up processor time are less likely to be wrong than for general application work, but you should still first write the regular C version of the algorithm, then profile your code using Shark, in order to know which part is taking up the most time, and consequently is to be tackled first (and then the second most, etc.). Then benchmark your improvements to make sure you did improve things (it’s unlikely for the NEON version to be slower than scalar code, but regressions can happen when trying to improve the NEON code).

Detection and Selection

Obviously, devices earlier than the iPhone 3GS are not going to be able to run NEON code, so there needs to be a way to select a different code path depending on which device is executing the code; even if you only target, say, the iPad, there needs to be a way to let it be known that the code requires NEON, so that it does not accidentally run on a non-NEON device.

This way is to build the application for both ARMv6 and ARMv7 (Optimized setting in Xcode), and, using #ifdefs, disable at compile time the undesired code (the NEON code when building for ARMv6, the non-NEON fallback when building for ARMv7; __ARM_NEON__ will be defined if NEON is available); if you target only ARMv7 devices, then only build for ARMv7 (in that case, don’t forget to add armv7 in the UIRequiredDeviceCapabilities of your Info.plist). When your application is run, the device will automatically pick the most capable half it can: the ARMv7/NEON one for ARMv7-capable devices, the ARMv6 half, if present, otherwise. The drawback is that, unless you target only ARMv7 devices, the whole application code size (not just the code for which there is a NEON version) will end up twice in the final distribution: once compiled for ARMv6, and once for ARMv7. As executable code is typically a small portion of the application size, this is not a problem for most applications, but it is a problem for some of them.1

Don’t forget that you’ll need to disable the NEON code at compile time when building for the simulator, as your application is compiled for x86 when targeting the simulator, and NEON code will cause build errors in this context. This means you always need to also write a generic C version of the algorithm, even if you only target the iPad, or you won’t be able to run your application in the simulator.

Development

There are actually different ways you can create NEON code. You can program directly in assembly language; it does require good knowledge of ARM assembly programming, so it’s not for everyone, but NEON programming isn’t for everyone, either. You can also remain in C and use compiler intrinsics (which you get by including arm_neon.h), leaving the compiler to worry about register allocation, assembly language details, etc.

ARM mentions a third way, which is to let the compiler auto-vectorize, but I don’t know how well it works, or if it is even enabled in the iPhone toolchain, so I’m not going to reference it here.

I’ve only used assembly programming so far, so I will generally describe techniques under assembly terms, but everything should be equally applicable when programming with intrinsics, just with a different syntax. There is one important thing, however, which is applicable only when you program in assembly (you don’t have to worry about that when using intrinsics): make sure you save and restore d8 to d15 if you use them in your function, as the ABI specifies that these registers must be preserved by function calls. If you don’t, it may work fine for a while, until you realize with horror that the floating-point variables in the calling functions become corrupted, and all hell breaks loose. So make sure these registers are saved and restored if they are used.

Architecture Overview

Now we get to the meat of the matter. To follow from home you will need the architecture document from ARM describing these instructions; fortunately, you already have it. You see, Shark has this feature where you can get help on a particular assembler instruction, which it provides simply by opening the architecture specification document at the page for this instruction. While for PowerPC and Intel, the document bundled with Shark is a simple list of all the instructions, for ARM it is actually a fairly complete subset of the ARM Architecture Reference Manual (some obsolete information is omitted). Rather than open it through Shark, you can open it directly in Preview by finding the helper application (normally at /Library/Application Support/Shark/Helpers/ARM Help.app), right clicking->Show Package Contents, and locating the PDF in the resources.

I will sometimes reference AltiVec and/or SSE, as these are the SIMD instruction set extensions that iPhone developers are most likely to be familiar with, and there is already an important body of information for both architectures online; less so for NEON.

Programmer Model

As you know, SIMD (Single Instruction Multiple Data) architectures apply the same operation (multiplication, shift, etc.) in parallel to a number of elements; the range of elements to which an operation is applied is called a vector. In modern SIMD architectures, vectors have a constant size in number of bits, and contain a different number of elements depending on the element size being operated on: for instance, in both AltiVec and SSE vectors are 128 bits, if the operation is on 8-bit elements for instance, then it operates on 16 elements in parallel (128/8); if the operation is on 16-bit elements, then each vector contains 8 of them (128/16).

In NEON you have access to sixteen 128-bit vector registers, named q0 to q15; this is the same vector size as AltiVec and SSE, and there are twice as many registers as 32-bit SSE, but half as many as AltiVec. However, this register file of 16 Q (for Quadword) vectors can also be seen as thirty-two 64-bit D (Doubleword) vectors, named d0 to d31: each D register is one half, either the low or high one, of a Q register, and conversely each Q register is made up of two D registers; for instance, d12 is the low half of q6, and q3 is at the same location as the d6-d7 pair. Most instructions that do not change the element size between input and output can operate on either D or Q registers; instructions that narrow elements as part of their operation (e.g. a 16-bit element on input becomes an 8-bit element on output) take Q vectors as input and have a D vector as output; instructions that enlarge elements as part of their operation take D vectors as input and have a Q vector as output. There are, however, a few instructions that only work on D vectors: the vector loads, vector stores, vpadd, vpmax, vpmin, and vtbl/vtbx; while for some operations this matters very little (e.g. to load two Q vectors, you use a load multiple instruction with a list containing the four matching D vectors), in the other cases this means the operation must be done in two steps to operate on a Q vector, once on the lower D half and once on the higher D half.

This D/Q duality provides a consistent way to handle narrowing and widening operations, instead of the somewhat ad hoc schemes used by AltiVec and SSE (e.g. for full precision multiplies on AltiVec you multiply the even elements, then the odd elements of a vector; on SSE you obtain the low part of the multiplication, then the high part). It also makes it easier to manage the narrowed vector prior to an operation that uses one, or after an operation that produces one. It does make it a bit tricky to ensure that you don’t overwrite data you want to keep, as you must remember not to use q10 if you want to keep the data in d20.

The register file is shared with the floating-point unit, but NEON and floating-point instructions can be freely intermixed (provided, of course, that you share the registers correctly), contrary to MMX/x87.

A reminder: the ARM architecture (at least as used in the iPhone) is little-endian; remember that when permuting or otherwise manipulating the vector elements.

Architectural features

NEON instructions typically have two input registers and one separate output register, so calculations are generally non-destructive. There are three-operand instructions like multiply-add, but in that case there is no separate output register: in multiply-add, for instance, the addition input register is also the output. A bit less usual is the fact that some instructions, like vzip, have two output registers.

Some instructions take a list of registers. It must be a list of consecutive D registers, though in some cases there can be a delta of two between registers (e.g. {d13, d15, d17}), in order to support data in Q registers.

Some instructions can take a scalar as one of their inputs. In that case, the scalar is used for all operations instead of having the corresponding elements of the two vectors be matched. For instance, if the vector containing a, b, c, d is multiplied by the scalar k, then the result is k×a, k×b, k×c, k×d. Scalars can also be duplicated to all elements of a vector, in preparation for operations that do not support scalar operands. Scalars are specified by the syntax dn[x], where x is the index of the scalar in the dn vector.

Syntax

All NEON instructions, even loads and stores, begin with “V”, which makes them easy to tell apart, and easy to locate in the architecture documentation. Instructions can have one (or more) letter just after the V which acts as a modifier: for instance “Q” means the instruction saturates, “R” that it rounds, and “H” that it halves. Not all combinations are possible (far from it), but this gives you an indication when reading the instruction of what it does.

Practically all instructions need to take a suffix telling the individual size and type of the elements being operated on, from .u8 (unsigned byte) to .f32 (single-precision floating-point): for instance, vqadd.s16. If the element size changes as part of the operation, the suffix indicates the element size of the narrowest input. Some instructions only need the element type to be partially specified, for instance vadd can operate on .i16, as it only needs to know that the elements are integers (and their sizes), not whether they are signed or unsigned; some instructions even only need the element size (e.g. .32). However, always use the most specific data type you can, for instance if you’re currently operating on u32 data, then specify .u32 even for instructions that would just as well accept .32: the assembler will accept it and it will make your code clearer and easier to type-check.

A little historical note: NEON instructions used to have a different syntax, and some of them changed names. Notably, instructions where the element size changed took two suffixes, giving the sizes after and before the operation (e.g. .u32.u16). This is something you may still see in disassembly, for instance. And, for some reason, the iPhone SDK assembler only accepts instruction mnemonics in lowercase, so while the instructions are uppercase in the documentation, be sure to write them in lowercase.

Instruction Overview

It is way out of the scope of this blog post to provide a full breakdown of the NEON capabilities (like, for instance, http://developer.apple.com/hardwaredrivers/ve/sse.html does for SSE). I will just give you a quick rundown of each major area to get you started, after that the documentation should be enough.

Load and Stores
NEON of course has vector load and vector store instructions, and even vector load/store multiple instructions. However, these are typically only used for saving/restoring registers on the stack and loading constants, both places where you can easily guarantee alignment, as these instructions demand word alignment. To load and store the data from the streams you will be operating on, you will typically use vld1/vst1; these instructions handle unaligned access, and the element size they take in fact pretty much acts as an alignment hint: they will expect the address to be aligned to a multiple of the element size.

Much more interesting are the vld#/vst# instructions. These instructions allow you to deinterlace data when loading, and reinterlace it when storing; for instance if you have an array of floating-point xyz structures, then with vld3.f32 you will have the x values neatly loaded into one vector register, the y values in another, and the z values in a third. Even for two-element (e.g. stereo audio) or four-element (e.g. 32-bit RGBA pixels) interlaced data, it spares you the temptation to operate on non-uniform data; instead everything is neatly segregated in its own register (one register holds all the left samples, one register holds all the alpha values, etc.). Notice these have the option to operate on non-consecutive registers of the form {<Dd>, <Dd+2>, <Dd+4>, <Dd+6>}, so that you can load/store Q registers using two of these instructions (one filling the low D halves, one the high D halves).

These instructions can also load one data or one structure to a particular element of a register, so scatter loading (not that it should be abused) is even easier than with SSE; you can also load the same data to all elements of the register directly.

In NEON you can’t really do software realignment, as just like SSE there is no real support for this (vext looks tempting, until you realize the amount is an immediate, fixed at compile time). By starting with a few scalar iterations, you may be able to align the output stream to a multiple of the vector size; however the other streams are typically not aligned to a vector boundary at this point, so use unaligned accesses for everything else.

Permutation
NEON does have a real permute with vtbl/vtbx, however it doesn’t come cheap. Loading a Q vector with permuted data from two Q vectors, which is the equivalent of an AltiVec vperm instruction, will require issuing two instructions which will take 6 cycles in total on a Cortex A8, so save this for when it’s really worth it.

For permutations known at compile time, you should be able to combine the various permutation instructions to do your bidding: vext, vzip/vuzp, vtrn and vrev; vswp can be considered a permutation instruction too, as it can serve as the equivalent of vtrn.64 and .128 (which don’t exist), for instance. vzip in particular acts a bit like the AltiVec merge instructions, though the fact that it overwrites its inputs makes it slightly unwieldy. Don’t forget you can use the structured load/store instructions to get the data in the right place right as you load it, instead of permuting it afterwards.

Comparisons and Boolean operators
There are a variety of comparison instructions, including ones which compare directly to 0; as is customary, if the comparison is true, the corresponding element is set to all ones, otherwise to all zeroes. There is the usual array of boolean operators to manipulate these masks, plus some less usual ones such as XOR, AND with one operand complemented, and OR with one operand complemented. Oh, and there is a select instruction (in fact, three, depending on which input you want to overwrite) to make good use of these masks.

Floating-Point
NEON floating-point capabilities are very similar to AltiVec. To wit:

just like AltiVec, only single-precision floating-point numbers are available

by default, denormals are flushed to 0 and results are rounded to nearest

only estimates to reciprocal square root and reciprocal are given, refinement steps are necessary to get the IEEE correct result

there are single-instruction multiply-add and multiply-subtract

This is not to say there is no difference, however:

contrary to AltiVec, multiply-add seems to be UNfused, there is apparently an intermediate rounding step

probably related to the previous point, refinement step instructions for reciprocal square root and reciprocal are provided.

denormal handling cannot be turned on

conversion to integer necessarily rounds towards 0

there are no log and 2x estimates

some horizontal NEON operations are available, as well as absolute differences.

Integer
I’d qualify the integer portions of NEON as very nimble. For instance, you can right shift at the same time as you narrow, allowing you to extract any subpart of the wider element, and not only that, but you can round and saturate at the same time as you extract, including unsigned saturation from signed integers; very useful for extracting the result from an intermediate fixed-point format. The other way round, you can shift left and widen to get to that intermediate format. Simple left shifts can also saturate; without this, extracting bitfields with saturation is really unwieldy. There are also shift-right-and-insert instructions which allow you to pack bit formats efficiently, for instance.

The multiplications are okay, I guess, though vq(r)dmulh is pretty much your only choice for a multiplication that does not widen and is useful for fixed-point computations, so better learn to love it.

Miscellaneous
Though not part of NEON, I should mention the pld (preload) instruction, which has been here since ARMv5TE, as such memory hint/prefetch instructions are often closely associated with SIMD engines. Architecturally, the effect of the pld instruction is not specified; the action performed depends on the processor. On the Cortex A8, this instruction causes the cache line containing the address to be loaded into the L2 cache, from which the NEON unit loads directly. If you do use pld, make sure to benchmark the performance before and after, as it can slow things down if used incorrectly.

In general, the documentation from ARM is of good quality, however there are not many figures to explain the operation of an instruction, so you should be ready to read accurate but verbose pseudocode if you’re unsure of the operation of an instruction or need to check it does what you think it does.

Cortex A8 implementation of NEON

The Cortex A8 implements NEON and VFP as a coprocessor “located” downstream from the ARM core: that is, NEON and VFP instructions go through the ARM pipeline undisturbed, before being decoded again by the NEON unit and going through its pipeline. This has a few consequences, the main one being that, while moving a value from an ARM register to a NEON/VFP register is fast, moving a value from a NEON/VFP register to an ARM register is very slow, causing a 20 cycle pipeline stall.

On the Cortex A8, most NEON instructions execute with a single cycle throughput; however, latencies are typically 2 or more, so directly using the result of the previous instruction will cause a stall; try to alternate operations on two parallel data streams to maximize throughput. Some NEON instructions that operate on Q vectors will execute in two cycles while they take only one when operating on D vectors, as if they were split in two instructions that operate on the two D halves (and in fact this is probably what happens); not much you can do about it, just something to know (not really unusual: remember that up until the Core 2 Duo, Intel processors could only execute SSE instructions by breaking them in two, since key internal data paths were only 64 bits wide, so all 128-bit instructions took two cycles). However, vzip and vuzp on Q vectors actually take 3 cycles instead of 1 for D vectors, since when operating on Q vectors the operation cannot be reduced to two operations on D vectors.

The Cortex A8 has a second NEON pipeline for load/store and permute instructions, so these instructions can be issued in parallel with non-permute instructions (provided there is no dependency issue). Remember the Cortex A8 (and its NEON unit) is an in-order core, so it is not going to go fetch farther instructions in the instruction stream to extract parallelism: only instructions next to each other can be issued in parallel. Notice that duplicating a scalar is considered a permute instruction, so provided you do so a bit before the result is needed, the operation is pretty much free.

One last consideration from a recent ARM blog post is that you shouldn’t use “regular” ARM code to handle the first and last scalar iterations, as there is a penalty when writing to the same area of memory from both NEON and ARM code; even scalar iterations should be done with NEON code (which should be easy with single element loads and stores).

Oh, you mean I have to conclude?

My impression of NEON as a whole so far is that it seems a capable SIMD/multimedia architecture, and stands comparison with AltiVec and SSE. It doesn’t have some things like sum of absolute differences, and there are probably some missing features I haven’t noticed yet, so it still has to grow a bit to reach parity, but it is already very useful.