Introduce a zeromem_dczva function on AArch64 that can handle unaligned
addresses and makes use of the DC ZVA instruction to zero a whole block
at a time. The zeroing takes place directly in the cache, which speeds
it up by avoiding external memory accesses.
Remove the zeromem16 function on AArch64 and replace it with an alias
to zeromem. The zeromem16 function is now deprecated.
Remove the 16-byte alignment constraint on __BSS_START__ in
firmware-design.md as it is no longer mandatory (it used to be required
by zeromem16).
Change the 16-byte alignment constraint in SP min's linker script to an
8-byte alignment constraint, as the AArch32 zeromem implementation is
now more efficient on 8-byte aligned addresses.
Introduce zero_normalmem and zeromem helpers in platform agnostic header
that are implemented this way:
* AArch32:
* zero_normalmem: zero using usual data access
* zeromem: alias for zero_normalmem
* AArch64:
* zero_normalmem: zero normal memory using DC ZVA instruction
(needs MMU enabled)
* zeromem: zero using usual data access
Usage guidelines: in most cases, zero_normalmem should be preferred.
There are two scenarios where zeromem (or memset) must be used instead:
* Code that must run with MMU disabled (which means all memory is
considered device memory for data accesses).
* Code that fills device memory with null bytes.
Optionally, the following rule can be applied if performance is
important:
* Code zeroing small areas (few bytes) that are not secrets should use
memset to take advantage of compiler optimizations.
Note: code that zeroes security-critical information should use
zero_normalmem/zeromem instead of memset, to prevent the zeroing from
being removed by compiler optimizations or by misbehaving versions of
GCC.
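A short usage sketch of these guidelines (the buffer names, sizes and the
utils.h header location are illustrative assumptions, not taken from the
patch):

#include <string.h>
#include <utils.h>	/* assumed header declaring zero_normalmem()/zeromem() */

#define KEY_SIZE	32u

static unsigned char key_buf[KEY_SIZE];	/* secret, lives in normal memory */
static unsigned char scratch[16];	/* small, non-secret scratch area */

void wipe_buffers(void)
{
	/* Secret data in normal memory with the MMU enabled: use
	 * zero_normalmem() so the zeroing cannot be optimised away and, on
	 * AArch64, benefits from DC ZVA. With the MMU disabled or for
	 * device memory, zeromem() would be used instead. */
	zero_normalmem(key_buf, sizeof(key_buf));

	/* Small, non-secret area: plain memset() is fine and lets the
	 * compiler optimise freely. */
	memset(scratch, 0, sizeof(scratch));
}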
Fixes ARM-software/tf-issues#408
Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
- If it applies (i.e. the CPU is affected by the errata), an INFO
message is printed, confirming that the errata workaround has been
applied.
- If it does not apply, a VERBOSE message is printed, confirming
that the errata workaround has been skipped.
- If an errata workaround is not enabled, but would have applied had
it been, a WARN message is printed, alerting that the errata
workaround is missing.
The CPU errata messages are printed by both BL1 (primary CPU only) and
runtime firmware on debug builds, once for each CPU/errata combination.
Relevant output from Juno r1 console when ARM Trusted Firmware is built
with PLAT=juno LOG_LEVEL=50 DEBUG=1:
VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
INFO: BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
...
VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
INFO: BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
...
VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
INFO: BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied
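A hedged sketch of the reporting policy described above (the function and
enum names are hypothetical, not the actual Trusted Firmware helpers):

#include <stdio.h>

/* Hypothetical status of one erratum workaround on the current CPU. */
enum errata_status {
	ERRATA_APPLIED,		/* workaround enabled and CPU affected */
	ERRATA_NOT_APPLIED,	/* workaround enabled but CPU not affected */
	ERRATA_MISSING		/* workaround disabled but CPU is affected */
};

/* One message per CPU/erratum combination, mirroring the INFO/VERBOSE/
 * WARNING levels shown in the console output above. */
static void report_erratum(const char *bl, const char *cpu, const char *id,
			   enum errata_status status)
{
	switch (status) {
	case ERRATA_APPLIED:
		printf("INFO:    %s: %s: errata workaround for %s was applied\n",
		       bl, cpu, id);
		break;
	case ERRATA_NOT_APPLIED:
		printf("VERBOSE: %s: %s: errata workaround for %s was not applied\n",
		       bl, cpu, id);
		break;
	case ERRATA_MISSING:
		printf("WARNING: %s: %s: errata workaround for %s was missing!\n",
		       bl, cpu, id);
		break;
	}
}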
Douglas Raillard [Mon, 16 Jan 2017 11:32:28 +0000 (11:32 +0000)]
Add -fno-builtin to CFLAGS
Disable the automatic substitution of functions with builtins. The
existing -ffreestanding option should already do this but explicitly
adding -fno-builtin reduces the risk of compiler variation. With this
option, GCC is not supposed to be able to make assumptions on what the
function does, which could otherwise lead to security-sensitive code
removal.
This can lead to potentially less efficient code but improves
predictability of what code is actually compiled into the binary.
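An illustration of the kind of assumption being disabled; with builtin
knowledge of memset(), a compiler may treat the final zeroing of a dead
buffer as removable (the buffer name is illustrative):

#include <string.h>

void handle_secret(void)
{
	char password[64];

	/* ... fill and use password ... */

	/* With memset() treated as a builtin, the compiler may notice that
	 * 'password' is never read again and delete this call as a dead
	 * store. -fno-builtin (and -ffreestanding) removes that knowledge,
	 * so the external call is kept; security-critical code should still
	 * prefer zero_normalmem()/zeromem(). */
	memset(password, 0, sizeof(password));
}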
Change-Id: I06ad151c61318bd1b00d84976f051d2d94314acc
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
Masahiro Yamada [Sun, 25 Dec 2016 04:52:22 +0000 (13:52 +0900)]
fiptool: support --align option to add desired alignment to image offset
The current fiptool packs all the images without any padding between
them, so the offset of each image has no particular alignment. This is
inefficient when, for example, the FIP is read from a block-oriented
device.
For example, (e)MMC is accessed by block addressing with a block size
of 512 bytes. The best case is that each image is aligned to a 512-byte
boundary, since the DMA engine can then transfer the whole image
directly to its load address. The worst case is that the offset does
not even have DMA-capable alignment (this is where we stand now). In
this case, we need to transfer every block into a bounce buffer and
then memcpy() it from the bounce buffer to its final destination. This
still works thanks to the abstraction provided by the block I/O layer,
but the CPU intervention for the whole data transfer makes it really
slow.
This commit adds a new option --align to the fiptool. This option,
if given, requests the tool to align each component in the FIP file to
the specified byte boundary. Also add a new make option, FIP_ALIGN, for
easier access to this feature: users can pass something like
FIP_ALIGN=512 on the command line, or add "FIP_ALIGN := 512" to their
platform.mk file.
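A minimal sketch of the offset rounding that the option implies (this is
not the tool's actual code):

#include <stdint.h>

/* Round 'offset' up to the next multiple of 'align'; 'align' must be a
 * power of two, e.g. FIP_ALIGN=512 for eMMC block addressing. */
static uint64_t align_up(uint64_t offset, uint64_t align)
{
	return (offset + align - 1) & ~(align - 1);
}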
Masahiro Yamada [Thu, 26 Jan 2017 18:56:58 +0000 (03:56 +0900)]
fiptool: embed fip_toc_entry in struct image
The struct image has "uuid" and "size" members to remember the values
these fields had in the TOC entry. So, parse_fip() copies them from
struct fip_toc_entry to struct image, and pack_images() copies them
back to struct fip_toc_entry.
The next commit (support for the --align option) will require the
"offset" field to be saved as well. This makes me realize that struct
image can simply embed struct fip_toc_entry.
This commit also allows the "flags" field to survive the "update"
command. At the moment, "flags" is not used in a useful way (yet
platforms can save their own parameters in the flags field), and it
makes sense to preserve it unless users explicitly replace the image.
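A hedged sketch of the resulting layout (the field names approximate
fiptool's definitions and may not match exactly):

#include <firmware_image_package.h>	/* assumed: struct fip_toc_entry */

/* Embedding the whole TOC entry means uuid, size, offset_address and
 * flags survive a parse_fip()/pack_images() round trip without being
 * copied field by field. */
struct image {
	struct fip_toc_entry toc_e;	/* the TOC entry as read or written */
	void *buffer;			/* image payload */
	struct image *next;		/* next image in the list */
};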
Masahiro Yamada [Fri, 27 Jan 2017 02:57:54 +0000 (11:57 +0900)]
fiptool: revive replace_image() to keep the image order by update command
Commit e0f083a09b29 ("fiptool: Prepare ground for expanding the set
of images at runtime") introduced another side effect; the "update"
command now changes the image order in the FIP.
Let's say you have a FIP with BL2, BL31, BL32, BL33. If you update,
for example, BL32 with the "update" command, you will get a new FIP
with BL2, BL31, BL33, BL32, in this order.
It happens like this: remove_image() removes the old image from the
linked list, add_image() adds the new image at the tail of the list,
and the images are then packed in the new order. Prior to that commit,
images were updated by replace_image(), but it was deleted by the
rework. Revive replace_image(), re-implemented to work with the linked
list.
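A minimal sketch of an order-preserving replace_image() over a singly
linked list (simplified: the real tool locates the old image by UUID):

#include <stddef.h>

struct image {
	struct image *next;
	/* ... TOC entry, buffer, ... */
};

/* Swap 'old_img' for 'new_img' in place so the image keeps its position
 * in the list, unlike remove_image() + add_image() which moves it to
 * the tail. */
static void replace_image(struct image **head, struct image *old_img,
			  struct image *new_img)
{
	struct image **p = head;

	while (*p != NULL && *p != old_img)
		p = &(*p)->next;

	if (*p == old_img) {
		new_img->next = old_img->next;
		*p = new_img;
	}
}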
Masahiro Yamada [Sun, 25 Dec 2016 03:41:41 +0000 (12:41 +0900)]
fiptool: fix existence check of FIP input file for update command
This line should check the existence of the input file, but it
actually checks the output file. When the -o option is given to the
"update" command, the output file is unlikely to exist, so parse_fip()
is skipped and an empty FIP file is output. This is wrong behaviour.
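A hedged sketch of the corrected check: the point is to test the input
FIP, not the output path given with -o (the variable name is
illustrative):

#include <stdbool.h>
#include <unistd.h>

/* The "update" command should only try to parse an existing FIP if the
 * *input* file exists. The bug was testing the output filename here, so
 * with -o the parse was skipped and an empty FIP was written out. */
static bool input_fip_exists(const char *infile)
{
	return access(infile, F_OK) == 0;
}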
The current user guide mentions that the Foundation model doesn't
support a debugger interface. Clarify that all FVPs support the
--cadi-server option, so that a CADI-compliant debugger can connect to
and control model execution.
David Cunado [Thu, 19 Jan 2017 10:26:16 +0000 (10:26 +0000)]
Resolve build errors flagged by GCC 6.2
With the GCC 6.2 compiler, more C undefined behaviour is flagged as
warnings, which results in build errors in the ARM TF build.
The specific issue that this patch resolves is the use of (1 << 31),
predominantly in case statements, where 1 is represented as a signed
int. When the bit is shifted into the MSB, the behaviour is undefined.
The resolution is to make the constant unsigned using a convenience
macro ULL(). A duplicate macro MAKE_ULL() is replaced by it.
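A sketch of the issue and the fix (the macro spelling shown for ULL() is
an assumption, and the constant names are illustrative):

/* Undefined behaviour: 1 is a signed int, so shifting a set bit into
 * the sign bit overflows, which GCC 6.2 flags. */
#define EXAMPLE_BIT_BAD		(1 << 31)

/* Assumed definition of the convenience macro: make the constant an
 * unsigned long long before shifting. */
#define ULL(x)			(x##ULL)

/* Well defined: the shift is performed on an unsigned 64-bit value. */
#define EXAMPLE_BIT_GOOD	(ULL(1) << 31)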
Fixes ARM-software/tf-issues#438
Change-Id: I08e3053bbcf4c022ee2be33a75bd0056da4073e1
Signed-off-by: David Cunado <david.cunado@arm.com>
To avoid timing side-channel attacks, a constant-time memory
comparison function must be used when comparing hashes. The affected
code only checks for equality, so no variant of memcmp() is needed;
bcmp() is enough.
Also, timingsafe_bcmp() is as fast as memcmp() when the two compared
regions are equal, so this change incurs no performance hit in that
case. If they are unequal, the boot sequence wouldn't continue as
normal, so performance is not an issue.
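A minimal sketch of the intended call site (the helper name is
illustrative, not the actual crypto module code):

#include <stddef.h>

extern int timingsafe_bcmp(const void *b1, const void *b2, size_t len);

/* Compare a computed hash against the expected one in constant time.
 * As with bcmp(), the result only says equal (0) or not equal
 * (non-zero), which is all the authentication code needs. */
static int hash_matches(const void *computed, const void *expected, size_t len)
{
	return timingsafe_bcmp(computed, expected, len) == 0;
}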
Change-Id: I1c7c70ddfa4438e6031c8814411fef79fd3bb4df
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Some side-channel attacks involve an attacker inferring something from
the time taken for a memory compare operation to complete, for example
when comparing hashes during image authentication. To mitigate this,
timingsafe_bcmp() must be used for such operations instead of the
standard memcmp().
This function executes in constant time and so doesn't leak any timing
information to the caller.
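A sketch of how such a constant-time comparison is typically written
(mirroring the well-known BSD timingsafe_bcmp; not necessarily the
project's exact code):

#include <stddef.h>

/* XOR every byte pair and OR the results together, so the loop always
 * runs over the full length regardless of where the buffers first
 * differ. Returns 0 if the regions are equal, non-zero otherwise. */
int timingsafe_bcmp(const void *b1, const void *b2, size_t n)
{
	const unsigned char *p1 = b1, *p2 = b2;
	int ret = 0;

	for (; n > 0; n--)
		ret |= *p1++ ^ *p2++;

	return (ret != 0);
}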
Change-Id: I470a723dc3626a0ee6d5e3f7fd48d0a57b8aa5fd
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
This code has been imported and slightly adapted from FreeBSD:
https://github.com/freebsd/freebsd/blob/6253393ad8df55730481bf2aafd76bdd6182e2f5/lib/libc/string/strnlen.c
Only one of AARCH32/AARCH64 is ever defined, so these macros belong to
the group that should be tested with #ifdef or #ifndef rather than #if.
The conditionals are mostly coded correctly, but I see some mistakes.
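A short illustration of the distinction (SOME_BUILD_OPTION is a
placeholder name):

/* AARCH32 and AARCH64 are either defined or left undefined by the build
 * system, never set to 0, so their existence is what must be tested: */
#ifdef AARCH64
/* AArch64-only code */
#endif

/* Flags that are always defined to 0 or 1, by contrast, are tested by
 * value: */
#if SOME_BUILD_OPTION
/* code enabled by the option */
#endif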
Masahiro Yamada [Thu, 12 Jan 2017 01:48:22 +0000 (10:48 +0900)]
Build: Fix parallel building
Soren reports that the build fails if the -j option is given:
$ make -j16 CROSS_COMPILE=aarch64-linux-gnu-
Building fvp
make: *** No rule to make target 'build/fvp/release/bl1/',
needed by 'build/fvp/release/bl1/bl1.ld'. Stop.
make: *** Waiting for unfinished jobs....
The cause of the failure is that $(dir ) leaves a trailing / on the
directory names, which must be stripped off to let Make create the
directory.
There are several ways to fix the issue. Here, I chose to make MAKE_LD
look like MAKE_C and MAKE_S, because bl*_dirs seems to be the central
place for creating directories.
In mbedtls_x509_parser.c there are some static arrays that are filled
during the integrity check and then read whenever an authentication
parameter is requested. However, they aren't cleared in case of an
integrity check failure, which can be problematic from a security
point of view. This patch clears these arrays in the case of failure.
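A hedged sketch of the pattern (the array names are hypothetical, not
the parser's actual fields):

#include <string.h>

/* Hypothetical cached authentication parameters filled in while the
 * certificate's integrity is checked. */
static unsigned char cached_hash[64];
static unsigned char cached_pk[512];

/* Called on an integrity-check failure: wipe everything that may have
 * been partially filled in, so stale data is never returned to a later
 * caller asking for an authentication parameter. */
static void clear_cached_params(void)
{
	memset(cached_hash, 0, sizeof(cached_hash));
	memset(cached_pk, 0, sizeof(cached_pk));
}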
Change-Id: I9d48f5bc71fa13e5a75d6c45b5e34796ef13aaa2
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
Masahiro Yamada [Tue, 17 Jan 2017 17:10:08 +0000 (02:10 +0900)]
Use *_END instead of *_LIMIT for linker derived end addresses
The usage of _LIMIT seems odd here, so rename as follows:
BL_CODE_LIMIT --> BL_CODE_END
BL_RO_DATA_LIMIT --> BL_RO_DATA_END
BL1_CODE_LIMIT --> BL1_CODE_END
BL1_RO_DATA_LIMIT --> BL1_RO_DATA_END
Basically, we want to use _LIMIT and _END properly as follows:
*_BASE + *_MAX_SIZE = *_LIMIT
*_BASE + *_SIZE = *_END
The _LIMIT is generally defined by platform_def.h to indicate the
platform-dependent memory constraint. So, its typical usage is
ASSERT(. <= BL31_LIMIT, "BL31 image has exceeded its limit.")
in a linker script.
On the other hand, _END is used to indicate the end address of the
compiled image, i.e. we do not know it until the image is linked.
Here, all of these macros belong to the latter group, so they should
be suffixed with _END.
When generating the list of files to be checked by checkpatch.pl, the
list generated by `git ls-files` is filtered with grep by a regular
expression. Due to a malformed regex, the dot of `.md` was treated as a
wildcard instead of a literal dot. This patch fixes the regex so that
it matches only a literal dot, thus allowing the two following files to
be checked:
Masahiro Yamada [Sat, 14 Jan 2017 14:22:02 +0000 (23:22 +0900)]
fiptool: fix add_image() and add_image_desc() implementation
The "make fip" shows the content of the generated FIP at the end of
the build. (This is shown by "fiptool info" command.)
Prior to commit e0f083a09b29 ("fiptool: Prepare ground for expanding
the set of images at runtime"), the last part of the build log of
make CROSS_COMPILE=aarch64-linux-gnu- BL33=../u-boot/u-boot.bin fip
was like follows:
You will notice two differences:
- the contents are displayed in BL33, BL31, BL2 order
- the offset values are wrong
The latter is more serious, and means "fiptool info" is broken.
Another interesting change is that "fiptool update" reverses the image
order every time. For example, if you input a FIP with BL2, BL31, BL33
in this order, the command will pack BL33, BL31, BL2 into the FIP, in
this order. Of course, the order of components is not a big deal,
except that users will get a poor impression of it.
The root cause is in the implementation of add_image(): image_head
points to the last added image. For example, if you call add_image()
for BL2, BL31, BL33 in this order, the resulting image chain is:
image_head -> BL33 -> BL31 -> BL2
They are then processed from image_head in "for" loops, which means
images are handled in a last-in, first-out manner.
Interestingly, "fiptool create" is still correct because
add_image_desc() also reverses the descriptor order and the command
works as before due to the double reverse.
The implementation of add_image() is efficient, but it made the
situation too complicated.
Let's make image_head point to the first added image. This makes
add_image() less efficient because every call of add_image() follows
the ->next chain to reach the tail. We could solve that by adopting a
nicer linked list structure, but I am not going that far because we
only handle a limited number of images anyway.
Do likewise for add_image_desc().
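A minimal sketch of the revised behaviour: image_head points to the
first added image and new nodes are appended at the tail, at the cost of
walking the ->next chain on every call (structure simplified):

#include <stddef.h>

struct image {
	struct image *next;
	/* ... TOC entry, buffer, ... */
};

static struct image *image_head;

/* Append at the tail so images keep their insertion order (BL2, BL31,
 * BL33 stays BL2, BL31, BL33), instead of prepending at the head, which
 * produced last-in, first-out processing. */
static void add_image(struct image *img)
{
	struct image **p = &image_head;

	while (*p != NULL)
		p = &(*p)->next;

	img->next = NULL;
	*p = img;
}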
Fixes: e0f083a09b29 ("fiptool: Prepare ground for expanding the set of images at runtime")
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Soren Brinkmann [Fri, 6 Jan 2017 19:07:00 +0000 (11:07 -0800)]
zynqmp: Migrate to new address space macros
Commit 0029624fe2d4c327ac885d04d5933f82f38e7071 ("Add
PLAT_xxx_ADDR_SPACE_SIZE definition") deprecates 'ADDR_SPACE_SIZE' in
favor of PLAT_(PHY|VIRT)_ADDR_SPACE_SIZE. Migrate the zynqmp platform
to use the new interface.
Masahiro Yamada [Thu, 22 Dec 2016 05:02:27 +0000 (14:02 +0900)]
Build: generate .d file at the same time as object is created
Currently, .d files are generated before any objects are built, so the
IS_ANYTHING_TO_BUILD flag is needed to avoid such processing for
non-build targets.
There is a cleverer way: just create the .d file at the same time as
the corresponding object is created. There is then no need for separate
rules for .d files.
This commit also fixes a bug: -D$(IMAGE) is defined for $(OBJ), but not
for $(PREREQUISITES). So, .d files are generated with a different macro
set from the .o files, and wrong .d files are generated.
For example, in lib/cpus/aarch64/cpu_helpers.S
#if IMAGE_BL31
#include <cpu_data.h>
#endif
<cpu_data.h> is parsed for the object when built for BL31, but the
.d file does not pick up that dependency.
With this commit, the compiler will generate .o and .d at the same
time, guaranteeing they are generated under the same circumstances.
dp-arm [Fri, 30 Dec 2016 09:55:48 +0000 (09:55 +0000)]
fiptool: Factor out setting of image descriptor action
An image descriptor contains an action and an argument. The action
indicates the intended operation, as requested by the user. It can be
pack, unpack or remove. Factor out setting those fields to a separate
function to minimize code duplication across the various commands that
modify these fields.
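A hedged sketch of the factored-out helper (the field, enum and function
names approximate fiptool's and may not match exactly):

#include <assert.h>
#include <string.h>

enum { DO_UNSPEC, DO_PACK, DO_UNPACK, DO_REMOVE };

typedef struct image_desc {
	int action;		/* intended operation for this image */
	char *action_arg;	/* e.g. the filename to pack or unpack to */
	/* ... uuid, names, image pointer, next ... */
} image_desc_t;

/* Single place where commands record what should happen to an image,
 * instead of each command poking the two fields itself. */
static void set_image_desc_action(image_desc_t *desc, int action,
				  const char *arg)
{
	assert(desc != NULL);
	desc->action = action;
	desc->action_arg = (arg != NULL) ? strdup(arg) : NULL;
}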
dp-arm [Thu, 3 Nov 2016 13:59:26 +0000 (13:59 +0000)]
fiptool: Add support for operating on binary blobs using the UUID
Previously, fiptool only understood a fixed set of images as
specified in tbbr_config.c. It preserved unknown images during
the update, unpack and remove operations but it was not possible to
explicitly refer to one of those unknown images.
Add a new --blob option to create/update/unpack/remove images that
are not known at compile time. This is accomplished by specifying
the UUID and filename pair as shown below:
dp-arm [Mon, 14 Nov 2016 15:54:32 +0000 (15:54 +0000)]
fiptool: Prepare ground for expanding the set of images at runtime
To allow operating on images with unknown UUIDs, fiptool needs to
be able to track an arbitrary amount of images and not be limited
to the set of images described by the builtin table.
Convert the table to a list to accommodate this scenario.
Douglas Raillard [Thu, 24 Nov 2016 15:43:19 +0000 (15:43 +0000)]
Abort preempted TSP STD SMC after PSCI CPU suspend
Standard SMC requests that are handled in the secure world by the
Secure Payload can be preempted by interrupts that must be handled in
the normal world. When the TSP is preempted, the secure context is
stored and control is passed to the normal world to handle the
non-secure interrupt. Once that completes, the preempted secure context
is restored. When restoring the preempted context, the dispatcher
assumes that the TSP's preempted context is still stored as the SECURE
context by the context management library.
However, PSCI power management operations cause synchronous entry into
the TSP. This overwrites the preempted SECURE context in the context
management library. When the SECURE context is later restored, the
Secure Payload crashes because it is no longer the preempted context.
This patch avoids corruption of the preempted SECURE context by aborting
any preempted SMC during PSCI power management calls. The
abort_std_smc_entry hook of the TSP is called when aborting the SMC
request.
It also exposes this feature as a FAST SMC, callable from the normal
world with FID TSP_FID_ABORT, to abort a preempted SMC.
Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
Masahiro Yamada [Mon, 19 Dec 2016 08:41:47 +0000 (17:41 +0900)]
zynqmp: add "override" directive to mandatory options
The platform.mk sets build options required for ZynqMP, but users
can still change them from the command line, like:
make PLAT=zynqmp RESET_TO_BL31=0 CROSS_COMPILE=...
Then, the makefile shows an error message in that case:
Using BL31 as the reset vector is only one option supported on ZynqMP.
Please set RESET_TO_BL31 to 1.
If the option is not user-configurable, the makefile can specify
"override" to prevent users from changing it. We do not need an error
message for a case that can never happen.
Likewise, ENABLE_PLAT_COMPAT := 0 and PROGRAMMABLE_RESET_ADDRESS := 1
are mandatory to avoid build errors.
- Clarify the documentation of the 'FWU_SMC_IMAGE_COPY' SMC in the
Firmware Update guide. Also extend the list of pre-conditions to
include the additional input validation implemented by previous
patches.
- Improve documentation of bl1_plat_mem_check() in the porting
guide. It now specifies that the generic FWU code protects
bl1_plat_mem_check() from integer overflows resulting from
the addition of the base address and size passed as arguments.
Before adding a base address and a size to compute the end
address of an image to copy or authenticate, check that this won't
result in an integer overflow. If it would, consider the input
arguments invalid.
As a result, bl1_plat_mem_check() can now safely assume that the end
address (computed as the sum of the base address and size of the
memory region) doesn't overflow, as the validation is done upfront in
bl1_fwu_image_copy/auth(). A debug assertion has nonetheless been
added in the ARM implementation to help catch such problems, should
bl1_plat_mem_check() be called in a different context in the future.
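A minimal sketch of the kind of check described (the types and names are
illustrative, not the exact BL1 code):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Reject (base, size) pairs whose end address would wrap around, so
 * that bl1_plat_mem_check() can safely assume base + size does not
 * overflow. */
static bool image_range_valid(uintptr_t base, size_t size)
{
	if (size == 0U)
		return false;

	/* Equivalent to base + size <= UINTPTR_MAX, without performing
	 * the potentially overflowing addition first. */
	return size <= (UINTPTR_MAX - base);
}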
Fixes TFV-1: Malformed Firmware Update SMC can result in copy
of unexpectedly large data into secure memory
Change-Id: I8b8f8dd4c8777705722c7bd0e8b57addcba07e25
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Signed-off-by: Dan Handley <dan.handley@arm.com>
This patch refactors the code of the function handling a FWU_AUTH_COPY
SMC in BL1. All input validation has been moved upfront so it is now
shared between the RESET and COPYING states.
Change-Id: I6a86576b9ce3243c401c2474fe06f06687a70e2f
Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
Signed-off-by: Dan Handley <dan.handley@arm.com>