Paolo Bonzini [Mon, 21 Sep 2020 08:37:49 +0000 (04:37 -0400)]
configure: remove target configuration
The config-target.mak files are small and constant, so we can just write them down explicitly.
This removes a pretty large part of the configure script, including the
whole logic to detect which accelerators are supported by each target.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 21 Sep 2020 08:58:27 +0000 (04:58 -0400)]
configure: remove useless config-target.mak symbols
Omit symbols that are not needed by softmmu or bsd-user targets,
in preparation for moving the generated config-target.mak files
into the source tree.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 18 Sep 2020 10:37:21 +0000 (06:37 -0400)]
configure: compute derivatives of target name in meson
Several CONFIG_* symbols in config-target.mak are easily computed from just
the target name. We do not need them in config-target.mak, and can instead
place them in the config_target dictionary only.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 18 Sep 2020 10:52:12 +0000 (06:52 -0400)]
configure: remove dead variable
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 18 Sep 2020 09:37:01 +0000 (05:37 -0400)]
configure: move accelerator logic to meson
Move to meson the code to detect the presence of accelerators, and
to define accelerator-specific config-target.h symbols.
For now the logic is duplicated in configure, because it is still used to build the list of targets (which is in turn used to create the config-target.mak files). The next patches remove it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 18 Sep 2020 09:22:37 +0000 (05:22 -0400)]
configure: rewrite accelerator defaults as tests
Prepare to process "auto" in meson rather than configure: standardize the shape of the code that changes "auto" to enabled/disabled, to ease review when it is moved to meson.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 18 Sep 2020 08:57:25 +0000 (04:57 -0400)]
configure: convert accelerator variables to meson options
Prepare for moving the tests to meson. For now enabled/disabled are the only possible values when meson is invoked, but "auto" will become a possibility later, once configure only parses the command line options.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 18 Sep 2020 10:06:09 +0000 (06:06 -0400)]
default-configs: move files to default-configs/devices/
Make room for target files in default-configs/targets/
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 23 Sep 2020 15:10:25 +0000 (11:10 -0400)]
travis: remove TCI test
TCI is already covered on gitlab CI, so we can remove it.
Cc: Thomas Huth <thuth@redhat.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter Maydell [Fri, 2 Oct 2020 15:19:42 +0000 (16:19 +0100)]
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block layer patches:
- Add block export infrastructure
- iotests improvements
- Document the throttle block filter
- Misc code cleanups
# gpg: Signature made Fri 02 Oct 2020 15:36:53 BST
# gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg: issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (37 commits)
qcow2: Use L1E_SIZE in qcow2_write_l1_entry()
qemu-storage-daemon: Fix help line for --export
iotests: Test block-export-* QMP interface
iotests: Allow supported and unsupported formats at the same time
iotests: Introduce qemu_nbd_list_log()
iotests: Factor out qemu_tool_pipe_and_status()
nbd: Deprecate nbd-server-add/remove
nbd: Merge nbd_export_new() and nbd_export_create()
block/export: Move writable to BlockExportOptions
block/export: Add query-block-exports
block/export: Create BlockBackend in blk_exp_add()
block/export: Move blk to BlockExport
block/export: Add BLOCK_EXPORT_DELETED event
block/export: Add block-export-del
block/export: Move strong user reference to block_exports
block/export: Add 'id' option to block-export-add
block/export: Add blk_exp_close_all(_type)
block/export: Allocate BlockExport in blk_exp_add()
block/export: Add node-name to BlockExportOptions
block/export: Move AioContext from NBDExport to BlockExport
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Alberto Garcia [Mon, 28 Sep 2020 16:23:33 +0000 (18:23 +0200)]
qcow2: Use L1E_SIZE in qcow2_write_l1_entry()
We overlooked these in 02b1ecfa100e7ecc2306560cd27a4a2622bfeb04
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200928162333.14998-1-berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Wed, 30 Sep 2020 13:39:09 +0000 (15:39 +0200)]
qemu-storage-daemon: Fix help line for --export
Commit 5f479a8d renamed the 'device' option of --export into
'node-name', but forgot to update the help in qemu-storage-daemon.
Fixes: 5f479a8dc086bfa42c9f94e9ab69962f256e207f
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200930133909.58820-1-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:17 +0000 (17:27 +0200)]
iotests: Test block-export-* QMP interface
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200924152717.287415-32-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:16 +0000 (17:27 +0200)]
iotests: Allow supported and unsupported formats at the same time
This is useful for specifying 'generic' as supported (which includes
only writable image formats), but still excluding some incompatible
writable formats.
It also removes more lines than it adds.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200924152717.287415-31-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:15 +0000 (17:27 +0200)]
iotests: Introduce qemu_nbd_list_log()
Add a function to list the NBD exports offered by an NBD server.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200924152717.287415-30-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
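As a rough illustration of what such a helper boils down to, here is a hypothetical stand-in (not the actual iotests code; the socket path is a placeholder and qemu-nbd must be on PATH):
import subprocess

def list_nbd_exports(socket_path):
    # Ask qemu-nbd to list the exports offered on a Unix socket and return
    # the listing as text; "qemu-nbd -L" prints one block per export.
    result = subprocess.run(["qemu-nbd", "-L", "-k", socket_path],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(list_nbd_exports("/tmp/nbd.sock"))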
Kevin Wolf [Thu, 24 Sep 2020 15:27:14 +0000 (17:27 +0200)]
iotests: Factor out qemu_tool_pipe_and_status()
We have three almost identical functions that call an external process
and return its output and return code. Refactor them into small wrappers
around a common function.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200924152717.287415-29-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
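The refactoring pattern described here is the usual "one worker, several thin wrappers" shape; a minimal sketch under assumed names (illustrative only, not the actual iotests implementation):
import subprocess

def tool_pipe_and_status(tool, args):
    # Run an external tool, capture combined stdout/stderr, and return the
    # output together with the process exit code.
    subp = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    output = subp.communicate()[0]
    if subp.returncode < 0:
        print('%s received signal %i: %s'
              % (tool, -subp.returncode, ' '.join(args)))
    return (output, subp.returncode)

def qemu_img_pipe_and_status(*args):
    # Thin wrapper around the common worker.
    return tool_pipe_and_status('qemu-img', ['qemu-img'] + list(args))

def qemu_io_pipe_and_status(*args):
    return tool_pipe_and_status('qemu-io', ['qemu-io'] + list(args))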
Kevin Wolf [Thu, 24 Sep 2020 15:27:13 +0000 (17:27 +0200)]
nbd: Deprecate nbd-server-add/remove
These QMP commands are replaced by block-export-add/del.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200924152717.287415-28-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:12 +0000 (17:27 +0200)]
nbd: Merge nbd_export_new() and nbd_export_create()
There is no real reason any more why nbd_export_new() and
nbd_export_create() should be separate functions. The latter only
performs a few checks before it calls the former.
What makes the current state stand out is that it's the only function in
BlockExportDriver that is not a static function inside nbd/server.c, but
a small wrapper in blockdev-nbd.c that then calls back into nbd/server.c
for the real functionality.
Move all the checks to nbd/server.c and make the resulting function
static to improve readability.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-27-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:11 +0000 (17:27 +0200)]
block/export: Move writable to BlockExportOptions
The 'writable' option is a basic option that will probably be applicable
to most if not all export types that we will implement. Move it from NBD
to the generic BlockExport layer.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-26-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:10 +0000 (17:27 +0200)]
block/export: Add query-block-exports
This adds a simple QMP command to query the list of block exports.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-25-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
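As a rough illustration of driving the new command from Python with nothing but the standard library (the socket path is a placeholder; this sketch is not part of the patch and the reply layout is simply whatever the running QEMU returns):
import json
import socket

def qmp_command(path, command, arguments=None):
    # Minimal QMP client: connect to a Unix socket, negotiate capabilities,
    # send one command and return its reply, skipping any async events.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        stream = sock.makefile("rw", encoding="utf-8")
        json.loads(stream.readline())                     # server greeting
        stream.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        stream.flush()
        json.loads(stream.readline())                     # handshake reply
        msg = {"execute": command}
        if arguments:
            msg["arguments"] = arguments
        stream.write(json.dumps(msg) + "\n")
        stream.flush()
        while True:
            reply = json.loads(stream.readline())
            if "event" not in reply:                      # ignore async events
                return reply

print(qmp_command("/tmp/qmp.sock", "query-block-exports"))
The remaining commands in this series are shown further down as plain payloads; they can be sent the same way.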
Kevin Wolf [Thu, 24 Sep 2020 15:27:09 +0000 (17:27 +0200)]
block/export: Create BlockBackend in blk_exp_add()
Every export type will need a BlockBackend, so creating it centrally in
blk_exp_add() instead of the .create driver callback avoids duplication.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-24-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:08 +0000 (17:27 +0200)]
block/export: Move blk to BlockExport
Every block export has a BlockBackend representing the disk that is
exported. It should therefore live in BlockExport.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-23-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:07 +0000 (17:27 +0200)]
block/export: Add BLOCK_EXPORT_DELETED event
Clients may want to know when an export has finally disappeared
(block-export-del returns earlier than that in the general case), so add
a QAPI event for it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200924152717.287415-22-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:06 +0000 (17:27 +0200)]
block/export: Add block-export-del
Implement a new QMP command block-export-del and make nbd-server-remove
a wrapper around it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-21-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
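A sketch of the corresponding payloads; deletion is asynchronous, so a client would normally also watch for the BLOCK_EXPORT_DELETED event from the previous entry (the id and the event's field names are assumptions based on the surrounding commit messages):
import json

# Request deletion of an export previously created with block-export-add.
delete_cmd = {"execute": "block-export-del", "arguments": {"id": "export0"}}
print(json.dumps(delete_cmd))

def export_is_gone(event, export_id):
    # True once the asynchronous BLOCK_EXPORT_DELETED event for this export
    # id shows up on the QMP event stream.
    return (event.get("event") == "BLOCK_EXPORT_DELETED"
            and event.get("data", {}).get("id") == export_id)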
Kevin Wolf [Thu, 24 Sep 2020 15:27:05 +0000 (17:27 +0200)]
block/export: Move strong user reference to block_exports
The reference owned by the user/monitor that is created when adding the
export and dropped when removing it was tied to the 'exports' list in
nbd/server.c. Every block export will have a user reference, so move it
to the block export level and tie it to the 'block_exports' list in
block/export/export.c instead. This is necessary for introducing a QMP
command for removing exports.
Note that exports are present in block_exports even after the user has
requested shutdown. This is different from NBD's exports where exports
are immediately removed on a shutdown request, even if they are still in
the process of shutting down. To prevent the user from still interacting with an export that is shutting down (and possibly removing it a second time), we need to remember whether the user actually still owns it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-20-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:04 +0000 (17:27 +0200)]
block/export: Add 'id' option to block-export-add
We'll need an id to identify block exports in monitor commands. This
adds one.
Note that this is different from the 'name' option in the NBD server,
which is the externally visible export name. While block export ids need
to be unique in the whole process, export names must be unique only for
the same server. Different export types or (potentially in the future)
multiple NBD servers can have the same export name externally, but still
need different block export ids internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-19-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
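To make the distinction concrete: the 'id' is what monitor commands refer to, while the NBD 'name' is what clients see on the wire. A hypothetical payload with placeholder values:
import json

# The 'id' is what monitor commands (e.g. block-export-del) refer to; the
# NBD 'name' is only the externally visible export name.
export_cmd = {"execute": "block-export-add",
              "arguments": {"type": "nbd",
                            "id": "export0",       # process-wide unique id
                            "node-name": "disk0",
                            "name": "my-disk"}}    # externally visible name
print(json.dumps(export_cmd))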
Kevin Wolf [Thu, 24 Sep 2020 15:27:03 +0000 (17:27 +0200)]
block/export: Add blk_exp_close_all(_type)
This adds a function to shut down all block exports, and another one to
shut down the block exports of a single type. The latter is used for now
when stopping the NBD server. As soon as we implement support for
multiple NBD servers, we'll need a per-server list of exports and it
will be replaced by a function using that.
As a side effect, the BlockExport layer has a list tracking all existing
exports now. closed_exports loses its only user and can go away.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-18-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:02 +0000 (17:27 +0200)]
block/export: Allocate BlockExport in blk_exp_add()
Instead of letting the driver allocate and return the BlockExport
object, allocate it already in blk_exp_add() and pass it. This allows us
to initialise the generic part before calling into the driver so that
the driver can just use these values instead of having to parse the
options a second time.
For symmetry, move freeing the BlockExport to blk_exp_unref().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-17-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:27:01 +0000 (17:27 +0200)]
block/export: Add node-name to BlockExportOptions
Every block export needs a block node to export, so add a 'node-name'
option to BlockExportOptions and remove the replaced option 'device'
from BlockExportOptionsNbd.
To maintain compatibility in nbd-server-add, BlockExportOptionsNbd needs
to be wrapped by a new type NbdServerAddOptions that adds 'device' back
because nbd-server-add doesn't use the BlockExportOptions base type at
all (so even without changing it to a 'node-name' option in
block-export-add, this compatibility code would be necessary).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-16-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
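The compatibility split shows up directly in the command shapes: the generic command takes 'node-name', while the legacy wrapper keeps its 'device' argument. Hypothetical payloads with placeholder values:
import json

# New-style export creation: generic BlockExportOptions plus the node name.
new_style = {"execute": "block-export-add",
             "arguments": {"type": "nbd", "id": "export0",
                           "node-name": "disk0", "writable": False}}

# Legacy wrapper: nbd-server-add keeps 'device' for compatibility.
old_style = {"execute": "nbd-server-add",
             "arguments": {"device": "disk0", "writable": False}}

print(json.dumps(new_style))
print(json.dumps(old_style))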
Kevin Wolf [Thu, 24 Sep 2020 15:27:00 +0000 (17:27 +0200)]
block/export: Move AioContext from NBDExport to BlockExport
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-15-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:59 +0000 (17:26 +0200)]
block/export: Move refcount from NBDExport to BlockExport
Having a refcount makes sense for all types of block exports. It is also
a prerequisite for keeping a list of all exports at the BlockExport
level.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-14-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:58 +0000 (17:26 +0200)]
nbd/server: Simplify export shutdown
Closing an export is somewhat convoluted because nbd_export_close() and nbd_export_put() call each other, and the ways they actually end up being nested are not necessarily obvious.
However, it is not really necessary to call nbd_export_close() from
nbd_export_put() when putting the last reference because it only does
three things:
1. Close all clients. We're going to refcount 0 and all clients hold a
reference, so we know there is no active client any more.
2. Close the user reference (represented by exp->name being non-NULL).
The same argument applies: If the export were still named, we would
still have a reference.
3. Freeing exp->description. This is really cleanup work to be done when
the export is finally freed. There is no reason to already clear it
while clients are still in the process of shutting down.
So after moving the cleanup of exp->description, the code can be
simplified so that only nbd_export_close() calls nbd_export_put(), but
never the other way around.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-13-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:57 +0000 (17:26 +0200)]
qemu-nbd: Use blk_exp_add() to create the export
With this change, NBD exports are now only created through the
BlockExport interface. This allows us finally to move things from the
NBD layer to the BlockExport layer if they make sense for other export
types, too.
blk_exp_add() returns only a weak reference, so the explicit
nbd_export_put() goes away.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-12-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:56 +0000 (17:26 +0200)]
nbd: Remove NBDExport.close callback
The export close callback is unused by the built-in NBD server. qemu-nbd
uses it only during shutdown to wait for the unrefed export to actually
go away. It can just use nbd_export_close_all() instead and do without
the callback.
This removes the close callback from nbd_export_new() and makes both
callers of it more similar.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-11-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:55 +0000 (17:26 +0200)]
nbd: Add writethrough to block-export-add
qemu-nbd allows use of writethrough cache modes, which mean that write
requests made through NBD will cause a flush before they complete.
Expose the same functionality in block-export-add.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-10-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
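In QMP terms this is just one more boolean in the generic options; a hypothetical payload with placeholder values:
import json

# Request writethrough semantics: every write completed over NBD is flushed
# to the image before the reply is sent, matching qemu-nbd's cache modes.
payload = {"execute": "block-export-add",
           "arguments": {"type": "nbd", "id": "export0",
                         "node-name": "disk0", "writable": True,
                         "writethrough": True}}
print(json.dumps(payload))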
Kevin Wolf [Thu, 24 Sep 2020 15:26:54 +0000 (17:26 +0200)]
nbd: Add max-connections to nbd-server-start
This is a QMP equivalent of qemu-nbd's --shared option, limiting the
maximum number of clients that can attach at the same time.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-9-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
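A hypothetical invocation mirroring qemu-nbd's --shared option; the address and the limit are placeholders and the argument names follow the commit description:
import json

# Start the built-in NBD server, allowing at most four clients at a time.
payload = {"execute": "nbd-server-start",
           "arguments": {"addr": {"type": "unix",
                                  "data": {"path": "/tmp/nbd.sock"}},
                         "max-connections": 4}}
print(json.dumps(payload))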
Kevin Wolf [Thu, 24 Sep 2020 15:26:53 +0000 (17:26 +0200)]
block/export: Remove magic from block-export-add
nbd-server-add tries to be convenient and adds two questionable
features that we don't want to share in block-export-add, even for NBD
exports:
1. When requesting a writable export of a read-only device, the export
is silently downgraded to read-only. This should be an error in the
context of block-export-add.
2. When using a BlockBackend name, unplugging the device from the guest
will automatically stop the NBD server, too. This may sometimes be
what you want, but it could also be very surprising. Let's keep
things explicit with block-export-add. If the user wants to stop the
export, they should tell us so.
Move these things into the nbd-server-add QMP command handler so that
they apply only there.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-8-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:52 +0000 (17:26 +0200)]
qemu-nbd: Use raw block driver for --offset
Instead of implementing qemu-nbd --offset in the NBD code, just put a
raw block node with the requested offset on top of the user image and
rely on that doing the job.
Not only does this simplify the nbd_export_new() interface and bring it closer to the set of options that the nbd-server-add QMP command offers, it also eliminates a potential source of bugs in the NBD code, which previously had to add the offset manually in all relevant places.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-7-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
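The same layering can be expressed with blockdev options: a 'raw' node with an 'offset' sits on top of the file node, so the NBD code never has to know about the offset. A hypothetical blockdev-add payload (filename and offset are placeholders):
import json

# A raw node exposing the data that starts 64 KiB into the image file; an
# NBD export of "top" then needs no offset handling of its own.
payload = {"execute": "blockdev-add",
           "arguments": {"driver": "raw", "node-name": "top",
                         "offset": 65536,
                         "file": {"driver": "file",
                                  "filename": "disk.img"}}}
print(json.dumps(payload))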
Kevin Wolf [Thu, 24 Sep 2020 15:26:51 +0000 (17:26 +0200)]
qemu-storage-daemon: Use qmp_block_export_add()
No reason to duplicate the functionality locally, we can now just reuse
the QMP command block-export-add for --export.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-6-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:50 +0000 (17:26 +0200)]
block/export: Add BlockExport infrastructure and block-export-add
We want to have a common set of commands for all types of block exports.
Currently, this is only NBD, but we're going to add more types.
This patch adds the basic BlockExport and BlockExportDriver structs and
a QMP command block-export-add that creates a new export based on the
given BlockExportOptions.
qmp_nbd_server_add() becomes a wrapper around qmp_block_export_add().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-5-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
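Putting the series together, the new generic workflow (with NBD as the only export type so far) looks roughly like the following sequence of QMP commands; the ids, node names and socket address are placeholders:
import json

# End-to-end sketch of the export workflow introduced by this series.
sequence = [
    {"execute": "nbd-server-start",
     "arguments": {"addr": {"type": "unix",
                            "data": {"path": "/tmp/nbd.sock"}}}},
    {"execute": "block-export-add",
     "arguments": {"type": "nbd", "id": "export0", "node-name": "disk0"}},
    {"execute": "query-block-exports"},
    {"execute": "block-export-del", "arguments": {"id": "export0"}},
]
for cmd in sequence:
    print(json.dumps(cmd))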
Kevin Wolf [Thu, 24 Sep 2020 15:26:49 +0000 (17:26 +0200)]
qapi: Rename BlockExport to BlockExportOptions
The name BlockExport will be used for the struct containing the runtime
state of block exports, so change the name of export creation options.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-4-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:48 +0000 (17:26 +0200)]
qapi: Create block-export module
Move all block export related types and commands from block-core to the
new QAPI module block-export.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-3-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Kevin Wolf [Thu, 24 Sep 2020 15:26:47 +0000 (17:26 +0200)]
nbd: Remove unused nbd_export_get_blockdev()
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-2-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Dr. David Alan Gilbert [Mon, 24 Aug 2020 10:29:14 +0000 (11:29 +0100)]
qemu-io-cmds: Simplify help_oneline
help_oneline is declared and starts as:
static void help_oneline(const char *cmd, const cmdinfo_t *ct)
{
if (cmd) {
printf("%s ", cmd);
} else {
printf("%s ", ct->name);
if (ct->altname) {
printf("(or %s) ", ct->altname);
}
}
However, there are only two routes to help_oneline being called:
help_f -> help_all -> help_oneline(ct->name, ct)
help_f -> help_onecmd(argv[1], ct)
In the first case, 'cmd' and 'ct->name' are the same thing,
so it's impossible for the if (cmd) to be false and then validly
print ct->name - this is upsetting gcc
( https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96739 )
In the second case, cmd is argv[1] and we know we've got argv[1]
so again (cmd) is non-NULL.
Simplify help_oneline by just printing cmd.
(Also strengthen argc check just to be pedantic)
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20200824102914.105619-1-dgilbert@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Mon, 21 Sep 2020 17:30:16 +0000 (19:30 +0200)]
docs: Document the throttle block filter
This filter was added back in 2017 for QEMU 2.11 but it was never
properly documented, so let's explain how it works and add a couple of
examples.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200921173016.27935-1-berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Thomas Huth [Fri, 18 Sep 2020 15:35:14 +0000 (17:35 +0200)]
tests/check-block: Do not run the iotests with old versions of bash
macOS ships with a very old version of bash (3.2), which is no longer suitable for running the iotests (e.g. it is missing support for "readarray", which is used in the file
tests/qemu-iotests/common.filter). Add a check to skip the iotests
in this case - if someone still wants to run the iotests on macOS,
they can install a newer version from homebrew, for example.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200918153514.330705-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Philippe Mathieu-Daudé [Mon, 21 Sep 2020 11:01:45 +0000 (13:01 +0200)]
block/sheepdog: Replace magic val by NANOSECONDS_PER_SECOND definition
Use the self-explanatory NANOSECONDS_PER_SECOND definition instead of a magic value.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200921110145.520944-1-philmd@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Peter Maydell [Fri, 2 Oct 2020 13:29:49 +0000 (14:29 +0100)]
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20201002' into staging
s390x update
- support extended sccb and diagnose 0x318
- implement additional instructions in tcg
- bug fixes
# gpg: Signature made Fri 02 Oct 2020 13:05:16 BST
# gpg: using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg: issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [unknown]
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cohuck@kernel.org>" [unknown]
# gpg: aka "Cornelia Huck <cohuck@redhat.com>" [unknown]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20201002:
s390x/tcg: Implement CIPHER MESSAGE WITH AUTHENTICATION (KMA)
s390x/tcg: We support Miscellaneous-Instruction-Extensions Facility 2
s390x/tcg: Implement MULTIPLY SINGLE (MSC, MSGC, MSGRKC, MSRKC)
s390x/tcg: Implement BRANCH INDIRECT ON CONDITION (BIC)
s390x/tcg: Implement MULTIPLY HALFWORD (MGH)
s390x/tcg: Implement MULTIPLY (MG, MGRK)
s390x/tcg: Implement SUBTRACT HALFWORD (SGH)
s390x/tcg: Implement ADD HALFWORD (AGH)
s390x/cpumodel: S390_FEAT_MISC_INSTRUCTION_EXT -> S390_FEAT_MISC_INSTRUCTION_EXT2
vfio-ccw: plug memory leak while getting region info
s390x/tcg: Implement MONITOR CALL
s390: guest support for diagnose 0x318
s390/sclp: add extended-length sccb support for kvm guest
s390/sclp: use cpu offset to locate cpu entries
s390/sclp: check sccb len before filling in data
s390/sclp: read sccb from mem based on provided length
s390/sclp: rework sclp boundary checks
s390/sclp: get machine once during read scp/cpu info
hw/s390x/css: Remove double initialization
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Fri, 2 Oct 2020 12:39:20 +0000 (13:39 +0100)]
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-python-021020-1' into staging
Python testing updates:
- drop python 3.5 test from travis
- replace Debian 9 containers with 10
- increase cross build timeout
- bump minimum python version in configure
- move user plugins tests to gitlab
- split deprecated builds into build and test
# gpg: Signature made Fri 02 Oct 2020 12:34:36 BST
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-python-021020-1:
gitlab: split deprecated job into build/check stages
gitlab: move linux-user plugins test across to gitlab
configure: Bump the minimum required Python version to 3.6
gitlab-ci: Increase the timeout for the cross-compiler builds
tests/docker: Remove old Debian 9 containers
shippable.yml: Remove the Debian9-based MinGW cross-compiler tests
tests/docker: Update the tricore container to debian 10
gitlab-ci: Remove the Debian9-based containers and containers-layer3
tests/docker: Use Fedora containers for MinGW cross-builds in the gitlab-CI
travis.yml: Drop the Python 3.5 build
travis.yml: Drop the superfluous Python 3.6 build
travis.yml: Update Travis to use Bionic and Focal instead of Xenial
travis.yml: Drop the default softmmu builds
migration: Silence compiler warning in global_state_store_running()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
David Hildenbrand [Mon, 28 Sep 2020 12:27:17 +0000 (14:27 +0200)]
s390x/tcg: Implement CIPHER MESSAGE WITH AUTHENTICATION (KMA)
As with the other crypto functions, we only implement subcode 0 (query)
and no actual encryption/decryption. We now implement S390_FEAT_MSA_EXT_8.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:16 +0000 (14:27 +0200)]
s390x/tcg: We support Miscellaneous-Instruction-Extensions Facility 2
We implement all relevant instructions.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:15 +0000 (14:27 +0200)]
s390x/tcg: Implement MULTIPLY SINGLE (MSC, MSGC, MSGRKC, MSRKC)
We need new CC handling, determining the CC based on the intermediate
result (64bit for MSC and MSRKC, 128bit for MSGC and MSGRKC).
We want to store out2 ("low") after muls128 to r1, so add
"wout_out2_r1".
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:14 +0000 (14:27 +0200)]
s390x/tcg: Implement BRANCH INDIRECT ON CONDITION (BIC)
Just like BRANCH ON CONDITION - however, the address is read from memory (always 8 bytes are read) and we have to wrap the address manually. The address is read using the current CPU DAT/address-space controls, just like
ordinary data.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:13 +0000 (14:27 +0200)]
s390x/tcg: Implement MULTIPLY HALFWORD (MGH)
Just like MULTIPLY HALFWORD IMMEDIATE (MGHI), only the second operand
(signed 16 bit) comes from memory.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:12 +0000 (14:27 +0200)]
s390x/tcg: Implement MULTIPLY (MG, MGRK)
Multiply two signed 64bit values and store the 128bit result in r1 (0-63)
and r1 + 1 (64-127).
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:11 +0000 (14:27 +0200)]
s390x/tcg: Implement SUBTRACT HALFWORD (SGH)
Easy to wire up.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:10 +0000 (14:27 +0200)]
s390x/tcg: Implement ADD HALFWORD (AGH)
Easy, just like ADD HALFWORD IMMEDIATE (AGHI).
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200928122717.30586-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
David Hildenbrand [Mon, 28 Sep 2020 12:27:09 +0000 (14:27 +0200)]
s390x/cpumodel: S390_FEAT_MISC_INSTRUCTION_EXT -> S390_FEAT_MISC_INSTRUCTION_EXT2
Let's avoid confusion with the "Miscellaneous-Instruction-Extensions
Facility 1"
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20200928122717.30586-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Cornelia Huck [Mon, 28 Sep 2020 10:17:01 +0000 (12:17 +0200)]
vfio-ccw: plug memory leak while getting region info
vfio_get_dev_region_info() unconditionally allocates memory
for a passed-in vfio_region_info structure (and does not re-use
an already allocated structure). Therefore, we have to free
the structure we pass to that function in vfio_ccw_get_region()
for every region we successfully obtained information for.
Fixes: 8fadea24de4e ("vfio-ccw: support async command subregion")
Fixes: 46ea3841edaf ("vfio-ccw: Add support for the schib region")
Fixes: f030532f2ad6 ("vfio-ccw: Add support for the CRW region and IRQ")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200928101701.13540-1-cohuck@redhat.com>
David Hildenbrand [Fri, 18 Sep 2020 08:51:22 +0000 (10:51 +0200)]
s390x/tcg: Implement MONITOR CALL
Recent upstream Linux uses the MONITOR CALL instruction for things like
BUG_ON() and WARN_ON(). We currently inject an operation exception when
we hit a MONITOR CALL instruction - which is wrong, as the instruction
is not glued to specific CPU features.
Doing a simple WARN_ON_ONCE() currently results in a panic:
[ 18.162801] illegal operation: 0001 ilc:2 [#1] SMP
[ 18.162889] Modules linked in:
[...]
[ 18.165476] Kernel panic - not syncing: Fatal exception: panic_on_oops
With a proper implementation, we now get:
[ 18.242754] ------------[ cut here ]------------
[ 18.242855] WARNING: CPU: 7 PID: 1 at init/main.c:1534 [...]
[ 18.242919] Modules linked in:
[...]
[ 18.246262] ---[ end trace a420477d71dc97b4 ]---
[ 18.259014] Freeing unused kernel memory: 4220K
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200918085122.26132-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:16 +0000 (15:44 -0400)]
s390: guest support for diagnose 0x318
DIAGNOSE 0x318 (diag318) is an s390 instruction that allows the storage
of diagnostic information that is collected by the firmware in the case
of hardware/firmware service events.
QEMU handles the instruction by storing the info in the CPU state. A
subsequent register sync will communicate the data to the hypervisor.
QEMU handles the migration via a VM State Description.
This feature depends on the Extended-Length SCCB (els) feature. If
els is not present, then a warning will be printed and the SCLP bit
that allows the Linux kernel to execute the instruction will not be
set.
Availability of this instruction is determined by byte 134 (aka fac134)
bit 0 of the SCLP Read Info block. This coincidentally expands into the
space used for CPU entries, which means VMs running with the diag318
capability may not be able to read information regarding all CPUs
unless the guest kernel supports an extended-length SCCB.
This feature is not supported in protected virtualization mode.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-9-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:14 +0000 (15:44 -0400)]
s390/sclp: add extended-length sccb support for kvm guest
As more features and facilities are added to the Read SCP Info (RSCPI)
response, more space is required to store them. The space used to store
these new features intrudes on the space originally used to store CPU
entries. This means as more features and facilities are added to the
RSCPI response, less space can be used to store CPU entries.
With the Extended-Length SCCB (ELS) facility, a KVM guest can execute
the RSCPI command and determine if the SCCB is large enough to store a
complete response. If it is not large enough, then the required length
will be set in the SCCB header.
The caller of the SCLP command is responsible for creating a
large-enough SCCB to store a complete response. Proper checking should
be in place, and the caller should execute the command once more with
the large-enough SCCB.
This facility also enables an extended SCCB for the Read CPU Info
(RCPUI) command.
When this facility is enabled, the boundary violation response cannot
be a result from the RSCPI, RSCPI Forced, or RCPUI commands.
In order to tolerate kernels that do not yet have full support for this
feature, a "fixed" offset to the start of the CPU Entries within the
Read SCP Info struct is set to allow for the original 248 max entries
when this feature is disabled.
Additionally, this is introduced as a CPU feature to protect the guest
from migrating to a machine that does not support storing an extended
SCCB. This could otherwise hinder the VM from being able to read all
available CPU entries after migration (such as during re-ipl).
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-7-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:13 +0000 (15:44 -0400)]
s390/sclp: use cpu offset to locate cpu entries
The start of the CPU entry region in the Read SCP Info response data is
denoted by the offset_cpu field. As such, QEMU needs to begin creating
entries at this address.
This is in preparation for when Read SCP Info inevitably introduces new
bytes that push the start of the CPUEntry field further away.
Read CPU Info is unlikely to ever change, so let's not bother
accounting for the offset there.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-6-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:12 +0000 (15:44 -0400)]
s390/sclp: check sccb len before filling in data
The SCCB must be checked for a sufficient length before it is filled
with any data. If the length is insufficient, then the SCLP command
is suppressed and the proper response code is set in the SCCB header.
While we're at it, let's clean up the length check by placing the
calculation inside a macro.
Fixes: 832be0d8a3bb ("s390x: sclp: Report insufficient SCCB length")
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-5-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:11 +0000 (15:44 -0400)]
s390/sclp: read sccb from mem based on provided length
The header contained within the SCCB passed to the SCLP service call
contains the actual length of the SCCB. Instead of allocating a static
4K size for the work sccb, let's allow for a variable size determined
by the value in the header. The proper checks are already in place to
ensure the SCCB length is sufficient to store a full response and that
the length does not cross any explicitly-set boundaries.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-4-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:10 +0000 (15:44 -0400)]
s390/sclp: rework sclp boundary checks
Rework the SCLP boundary check to account for different SCLP commands
(eventually) allowing different boundary sizes.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-3-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Collin Walling [Tue, 15 Sep 2020 19:44:09 +0000 (15:44 -0400)]
s390/sclp: get machine once during read scp/cpu info
Functions within read scp/cpu info will need access to the machine
state. Let's make a call to retrieve the machine state once and
pass the appropriate data to the respective functions.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200915194416.107460-2-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Philippe Mathieu-Daudé [Mon, 7 Sep 2020 02:40:20 +0000 (04:40 +0200)]
hw/s390x/css: Remove double initialization
Fix a possible copy/paste mistake introduced in commit bc994b74ea
("s390x/css: Use static initialization for channel_subsys fields").
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200907024020.854465-1-philmd@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Alex Bennée [Fri, 2 Oct 2020 09:15:38 +0000 (10:15 +0100)]
gitlab: split deprecated job into build/check stages
While the job is pretty fast for only a few targets we still want to
catch breakage of the build. By splitting the test step we can
allow_failures for that while still ensuring we don't miss the build
breaking.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201002091538.3017-1-alex.bennee@linaro.org>
Alex Bennée [Fri, 2 Oct 2020 10:32:23 +0000 (11:32 +0100)]
gitlab: move linux-user plugins test across to gitlab
Even with the recent split moving beefier plugins into contrib and
dropping them from the check-tcg tests we are still hitting time
limits. This possibly points to a slow down of --debug-tcg but seeing
as we are migrating stuff to gitlab we might as well move there and
bump the timeout.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201002103223.24022-1-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:27 +0000 (16:40 +0100)]
configure: Bump the minimum required Python version to 3.6
All our supported build platforms have Python 3.6 or newer nowadays, and
there are some useful features in Python 3.6 which are not available in
3.5 yet (e.g. the type hint annotations which will allow us to statically
type the QAPI parser), so let's bump the minimum Python version to 3.6 now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200923162908.95372-1-thuth@redhat.com>
Message-Id: <20200925154027.12672-16-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:26 +0000 (16:40 +0100)]
gitlab-ci: Increase the timeout for the cross-compiler builds
Some of the cross-compiler builds (the mips build and the win64 build
for example) are quite slow and sometimes hit the 1h time limit.
Increase the limit a little bit to make sure that we do not get failures
in the CI runs just because of a few minutes.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200921174320.46062-7-thuth@redhat.com>
Message-Id: <20200925154027.12672-15-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:25 +0000 (16:40 +0100)]
tests/docker: Remove old Debian 9 containers
We do not support Debian 9 in QEMU anymore, and the Debian 9 containers
are now no longer used in the gitlab-CI. Time to remove them.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20200921174320.46062-6-thuth@redhat.com>
Message-Id: <20200925154027.12672-14-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:24 +0000 (16:40 +0100)]
shippable.yml: Remove the Debian9-based MinGW cross-compiler tests
We're not supporting Debian 9 anymore, and we are now testing
MinGW cross-compiler builds in the gitlab-CI, too, so we do not
really need these jobs in the shippable.yml anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200921174320.46062-5-thuth@redhat.com>
Message-Id: <20200925154027.12672-13-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:23 +0000 (16:40 +0100)]
tests/docker: Update the tricore container to debian 10
We do not support Debian 9 anymore, thus update the Tricore container
to Debian 10 now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20200921174320.46062-4-thuth@redhat.com>
Message-Id: <20200925154027.12672-12-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:22 +0000 (16:40 +0100)]
gitlab-ci: Remove the Debian9-based containers and containers-layer3
According to our support policy, Debian 9 is not supported by the
QEMU project anymore. Since we now switched the MinGW cross-compiler
builds to Fedora, we do not need these Debian9-based containers
in the gitlab-CI anymore, and can now also get rid of the "layer3"
container build stage this way.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20200921174320.46062-3-thuth@redhat.com>
Message-Id: <20200925154027.12672-11-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:21 +0000 (16:40 +0100)]
tests/docker: Use Fedora containers for MinGW cross-builds in the gitlab-CI
According to our support policy, we do not support Debian 9 in QEMU
anymore, and we only support building the Windows binaries with a
very recent version of the MinGW toolchain. So we should not test
the MinGW cross-compilation with Debian 9 anymore, but switch to
something newer like Fedora. To do this, we need a separate Fedora
container for each build that provides the QEMU_CONFIGURE_OPTS
environment variable.
Unfortunately, the MinGW 64-bit compiler seems to be a little bit
slow, so we also have to disable some features like "capstone" in the
build here to make sure that the CI pipelines still finish within a
reasonable amount of time.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200921174320.46062-2-thuth@redhat.com>
Message-Id: <20200925154027.12672-10-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:20 +0000 (16:40 +0100)]
travis.yml: Drop the Python 3.5 build
We are soon going to remove the support for Python 3.5. So remove
the CI job now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200922070441.48844-1-thuth@redhat.com>
Message-Id: <20200925154027.12672-9-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:19 +0000 (16:40 +0100)]
travis.yml: Drop the superfluous Python 3.6 build
Python 3.6 is already the default Python in the jobs that are based
on Ubuntu Bionic, so it does not make much sense to test this again
separately.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200918103430.297167-7-thuth@redhat.com>
Message-Id: <20200925154027.12672-8-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:18 +0000 (16:40 +0100)]
travis.yml: Update Travis to use Bionic and Focal instead of Xenial
According to our support policy, we do not support Xenial anymore.
Time to switch the bigger parts of the builds to Focal instead.
A few jobs have to be updated to Bionic instead, since they are
currently still failing on Focal otherwise. Also "--disable-pie" is
causing linker problems with newer versions of Ubuntu ... so remove
that switch from the jobs now (we still test it in a gitlab CI job,
so we don't lose much test coverage here).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20200918103430.297167-6-thuth@redhat.com>
Message-Id: <20200925154027.12672-7-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:17 +0000 (16:40 +0100)]
travis.yml: Drop the default softmmu builds
The total runtime of all Travis jobs is very long and we are testing
all softmmu targets in the gitlab-CI already - so we can speed up the
Travis testing a little bit by not testing the softmmu targets here
anymore.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20200918103430.297167-5-thuth@redhat.com>
Message-Id: <20200925154027.12672-6-alex.bennee@linaro.org>
Thomas Huth [Fri, 25 Sep 2020 15:40:16 +0000 (16:40 +0100)]
migration: Silence compiler warning in global_state_store_running()
GCC 9.3.0 on Ubuntu complains:
In file included from /usr/include/string.h:495,
from /home/travis/build/huth/qemu/include/qemu/osdep.h:87,
from ../migration/global_state.c:13:
In function ‘strncpy’,
inlined from ‘global_state_store_running’ at ../migration/global_state.c:47:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: error:
‘__builtin_strncpy’ specified bound 100 equals destination size [-Werror=stringop-truncation]
106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... but we apparently really want to do a strncpy here - the size is already
checked with the assert() statement right in front of it. To silence the
warning, simply replace it with our strpadcpy() function.
Suggested-by: Philippe Mathieu-Daudé <philmd@redhat.com> (two years ago)
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200918103430.297167-4-thuth@redhat.com>
Message-Id: <20200925154027.12672-5-alex.bennee@linaro.org>
Peter Maydell [Thu, 1 Oct 2020 18:55:10 +0000 (19:55 +0100)]
Merge remote-tracking branch 'remotes/jsnow-gitlab/tags/ide-pull-request' into staging
Pull request
# gpg: Signature made Thu 01 Oct 2020 18:41:05 BST
# gpg: using RSA key F9B7ABDBBCACDF95BE76CBD07DEF8106AAFC390E
# gpg: Good signature from "John Snow (John Huston) <jsnow@redhat.com>" [full]
# Primary key fingerprint: FAEB 9711 A12C F475 812F 18F2 88A9 064D 1835 61EB
# Subkey fingerprint: F9B7 ABDB BCAC DF95 BE76 CBD0 7DEF 8106 AAFC 390E
* remotes/jsnow-gitlab/tags/ide-pull-request:
ide: cancel pending callbacks on SRST
ide: clear interrupt on command write
ide: remove magic constants from the device register
ide: reorder set/get sector functions
ide: model HOB correctly
ide: don't tamper with the device register
ide: rename cmd_write to ctrl_write
hw/ide/ahci: Do not dma_memory_unmap(NULL)
MAINTAINERS: Update my git address
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
John Snow [Fri, 24 Jul 2020 05:23:00 +0000 (01:23 -0400)]
ide: cancel pending callbacks on SRST
The SRST implementation did not keep up with the rest of IDE; it is
possible to perform a weak reset on an IDE device to remove the BSY/DRQ
bits, and then issue writes to the control/device registers which can
cause chaos with the state machine.
Fix that by actually performing a real reset.
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: https://bugs.launchpad.net/qemu/+bug/1878253
Fixes: https://bugs.launchpad.net/qemu/+bug/1887303
Fixes: https://bugs.launchpad.net/qemu/+bug/1887309
Signed-off-by: John Snow <jsnow@redhat.com>
John Snow [Fri, 24 Jul 2020 05:22:59 +0000 (01:22 -0400)]
ide: clear interrupt on command write
Not known to fix any bug, but I couldn't help but notice that ATA
specifies that writing to this register should clear an interrupt.
ATA7: Section 5.3.3 (Command register - Effect)
ATA6: Section 7.4.4 (Command register - Effect)
ATA5: Section 7.4.4 (Command register - Effect)
ATA4: Section 7.4.4 (Command register - Effect)
ATA3: Section 5.2.2 (Command register)
Other editions: try searching for the phrase "Writing this register".
Signed-off-by: John Snow <jsnow@redhat.com>
John Snow [Fri, 24 Jul 2020 05:22:58 +0000 (01:22 -0400)]
ide: remove magic constants from the device register
(In QEMU, we call this the "select" register.)
My memory isn't good enough to memorize what these magic runes
do. Label them to prevent mixups from happening in the future.
Side note: I assume it's safe to always set 0xA0 even though ATA2 claims
these bits are reserved, because ATA3 immediately reinstated that these
bits should be always on. ATA4 and subsequent specs only claim that the
fields are obsolete, so I assume it's safe to leave these set and that
it should work with the widest array of guests.
Signed-off-by: John Snow <jsnow@redhat.com>
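The kind of labels this introduces, sketched below; the macro names and the exact set of bits are assumptions:
/* Device ("select") register bits -- illustrative names. */
#define ATA_DEV_ALWAYS_ON  0xa0   /* bits 7 and 5: always set (reserved in ATA2, required in ATA3, obsolete later) */
#define ATA_DEV_LBA        0x40   /* LBA addressing mode */
#define ATA_DEV_SELECT     0x10   /* 0 = device 0, 1 = device 1 */
#define ATA_DEV_LBA_MSB    0x0f   /* LBA bits 27:24 in LBA28 mode */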
John Snow [Fri, 24 Jul 2020 05:22:57 +0000 (01:22 -0400)]
ide: reorder set/get sector functions
Reorder these just a pinch to make it more obvious at a glance what
the addressing mode is.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
John Snow [Fri, 24 Jul 2020 05:22:56 +0000 (01:22 -0400)]
ide: model HOB correctly
I have been staring at this FIXME for years and I never knew what it
meant. I finally stumbled across it!
When writing to the command registers, the old value is shifted into a
HOB copy of the register and the new value is written into the primary
register. When reading registers, the value retrieved is dependent on
the HOB bit in the CONTROL register.
By setting bit 7 (0x80) in CONTROL, any register read will, if it has
one, yield the HOB value for that register instead.
Our code has a problem: we were using bit 7 of the DEVICE register to
model this, but as described above the HOB bit lives in the CONTROL
register. We already use bus->cmd roughly as the control register, as
it stores the value from ide_ctrl_write, so read the HOB bit from there
instead.
Lastly, all command register writes reset the HOB, so fix that, too.
Signed-off-by: John Snow <jsnow@redhat.com>
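A sketch of the read/write behaviour described above; the field and macro names (hob_feature, IDE_CTRL_HOB, bus->cmd) follow hw/ide conventions but this is not the exact patch:
#define IDE_CTRL_HOB 0x80    /* bit 7 of the Control register */
/* Writing a command-block register shifts the old value into its HOB copy. */
static void ide_feature_write(IDEState *s, uint8_t val)
{
    s->hob_feature = s->feature;
    s->feature = val;
}
/* Reading returns the HOB copy when CONTROL.HOB is set; bus->cmd holds the
 * last value written to the Control register via ide_ctrl_write. */
static uint8_t ide_feature_read(IDEState *s)
{
    return (s->bus->cmd & IDE_CTRL_HOB) ? s->hob_feature : s->feature;
}
/* In addition, any write to the Command register clears CONTROL.HOB. */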
John Snow [Fri, 24 Jul 2020 05:22:55 +0000 (01:22 -0400)]
ide: don't tamper with the device register
In real ISA operation, register writes go out to an entire bus channel
and all listening devices receive the write. The devices do not toggle
the DEV bit based on their own configuration, nor does the HBA
intermediate or tamper with that value.
In reality, DEV0 and DEV1 each react to command register writes based
on whether or not that device is currently selected.
This does not fix a known bug, but it makes the code slightly simpler
and more obvious.
Signed-off-by: John Snow <jsnow@redhat.com>
John Snow [Fri, 24 Jul 2020 05:22:54 +0000 (01:22 -0400)]
ide: rename cmd_write to ctrl_write
It's the Control register, part of the Control block -- Command is
misleading here. Rename all related functions and constants.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Philippe Mathieu-Daudé [Sat, 18 Jul 2020 07:28:54 +0000 (09:28 +0200)]
hw/ide/ahci: Do not dma_memory_unmap(NULL)
libFuzzer triggered the following assertion:
cat << EOF | qemu-system-i386 -M pc-q35-5.0 \
-nographic -monitor none -serial none -qtest stdio
outl 0xcf8 0x8000fa24
outl 0xcfc 0xe1068000
outl 0xcf8 0x8000fa04
outw 0xcfc 0x7
outl 0xcf8 0x8000fb20
write 0xe1068304 0x1 0x21
write 0xe1068318 0x1 0x21
write 0xe1068384 0x1 0x21
write 0xe1068398 0x2 0x21
EOF
qemu-system-i386: exec.c:3621: address_space_unmap: Assertion `mr != NULL' failed.
Aborted (core dumped)
This is because we don't check the return value of dma_memory_map(),
which can return NULL, and then call dma_memory_unmap(NULL), which is
illegal. Fix it by only unmapping when the returned pointer is not NULL
(and the mapped size is not the expected one).
Cc: qemu-stable@nongnu.org
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200718072854.7001-1-f4bug@amsat.org
Fixes: f6ad2e32f8 ("ahci: add ahci emulation")
BugLink: https://bugs.launchpad.net/qemu/+bug/1884693
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
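A simplified sketch of the guarded pattern the fix describes; variable names are illustrative, not the exact hunk:
dma_addr_t len = wanted;
*ptr = dma_memory_map(as, addr, &len, DMA_DIRECTION_FROM_DEVICE);
if (len < wanted && *ptr) {
    /* Partial mapping: hand it back -- but only if we actually got a buffer.
     * The "&& *ptr" guard is what prevents dma_memory_unmap(NULL). */
    dma_memory_unmap(as, *ptr, len, DMA_DIRECTION_FROM_DEVICE, len);
    *ptr = NULL;
}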
John Snow [Thu, 1 Oct 2020 16:24:01 +0000 (12:24 -0400)]
MAINTAINERS: Update my git address
I am switching from github to gitlab.
Signed-off-by: John Snow <jsnow@redhat.com>
Peter Maydell [Thu, 1 Oct 2020 15:41:30 +0000 (16:41 +0100)]
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20201001' into staging
target-arm queue:
* Make isar_feature_aa32_fp16_arith() handle M-profile
* Fix SVE splice
* Fix SVE LDR/STR
* Remove ignore_memory_transaction_failures on the raspi2
* raspi: Various cleanup/refactoring
# gpg: Signature made Thu 01 Oct 2020 15:46:47 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20201001:
hw/arm/raspi: Remove use of the 'version' value in the board code
hw/arm/raspi: Use RaspiProcessorId to set the firmware load address
hw/arm/raspi: Introduce RaspiProcessorId enum
hw/arm/raspi: Use more specific machine names
hw/arm/raspi: Avoid using TypeInfo::class_data pointer
hw/arm/raspi: Move arm_boot_info structure to RaspiMachineState
hw/arm/raspi: Load the firmware on the first core
hw/arm/raspi: Display the board revision in the machine description
hw/arm/raspi: Remove ignore_memory_transaction_failures on the raspi2
hw/arm/bcm2835: Add more unimplemented peripherals
hw/arm/raspi: Define various blocks base addresses
target/arm: Fix SVE splice
target/arm: Fix sve ldr/str
target/arm: Make isar_feature_aa32_fp16_arith() handle M-profile
target/arm: Add ID register values for Cortex-M0
hw/intc/armv7m_nvic: Only show ID register values for Main Extension CPUs
target/arm: Move id_pfr0, id_pfr1 into ARMISARegisters
target/arm: Replace ARM_FEATURE_PXN with ID_MMFR0.VMSA check
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:08 +0000 (13:18 +0200)]
hw/arm/raspi: Remove use of the 'version' value in the board code
We expected the 'version' ID to match the board processor ID,
but this is not always true (for example boards with revision
id 0xa02042/0xa22042 are Raspberry Pi 2 with a BCM2837 SoC).
This was not important because we were not modelling them, but
since the recent refactor now allows these boards to be modelled, it
is safer to check the processor id directly. Remove the version
check.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-9-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:07 +0000 (13:18 +0200)]
hw/arm/raspi: Use RaspiProcessorId to set the firmware load address
The firmware load address depends on the SoC ("processor id") used,
not on the version of the board.
Suggested-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-8-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:06 +0000 (13:18 +0200)]
hw/arm/raspi: Introduce RaspiProcessorId enum
As we only support a reduced set of the REV_CODE_PROCESSOR id
encoded in the board revision, define the PROCESSOR_ID values
as an enum. We can simplify the board_soc_type and cores_count
methods.
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-7-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
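A sketch of the enum this adds; the names follow the commit description, while the concrete values and the helper below are illustrative:
typedef enum RaspiProcessorId {
    PROCESSOR_ID_BCM2836 = 1,   /* Raspberry Pi 2 class boards */
    PROCESSOR_ID_BCM2837 = 2,   /* Raspberry Pi 3 class boards */
} RaspiProcessorId;
/* board_soc_type() can then map the id to a SoC type directly, e.g.: */
static const char *board_soc_type(RaspiProcessorId id)
{
    return id == PROCESSOR_ID_BCM2837 ? TYPE_BCM2837 : TYPE_BCM2836;
}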
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:05 +0000 (13:18 +0200)]
hw/arm/raspi: Use more specific machine names
Now that we can instantiate different machines based on their
board_rev register value, we can have various raspi2 and raspi3
machines. In commit fc78a990ec103 we corrected the machine
description. Correct the machine names too. For backward
compatibility, add an alias to the previous generic name.
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-6-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:04 +0000 (13:18 +0200)]
hw/arm/raspi: Avoid using TypeInfo::class_data pointer
Using the TypeInfo::class_data pointer to create a MachineClass is
no longer the recommended way. The correct way is to open-code the
MachineClass fields in each class_init() method.
We cannot use TYPE_RASPI_MACHINE::class_base_init() because it is
called *before* each machine's class_init(), so the board_rev field
is not yet populated. We have to call
raspi_machine_class_common_init() manually for each machine.
This partly reverts commit a03bde3674e.
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-5-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
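A sketch of the resulting per-machine class_init(), using the board_rev field and raspi_machine_class_common_init() helper named above; the RaspiMachineClass cast macro and the revision value are assumptions:
static void raspi2b_machine_class_init(ObjectClass *oc, void *data)
{
    MachineClass *mc = MACHINE_CLASS(oc);
    RaspiMachineClass *rmc = RASPI_MACHINE_CLASS(oc);
    /* Open-code the board revision instead of passing it via class_data ... */
    rmc->board_rev = 0xa21041;   /* illustrative Pi 2B revision code */
    /* ... and call the common helper explicitly: class_base_init() runs
     * before this function, so it cannot see board_rev yet. */
    raspi_machine_class_common_init(mc, rmc->board_rev);
}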
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:03 +0000 (13:18 +0200)]
hw/arm/raspi: Move arm_boot_info structure to RaspiMachineState
The arm_boot_info structure belongs to the machine;
move it to RaspiMachineState.
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-4-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Philippe Mathieu-Daudé [Thu, 24 Sep 2020 11:18:02 +0000 (13:18 +0200)]
hw/arm/raspi: Load the firmware on the first core
The 'first_cpu' global is more of a QEMU accelerator-related concept
than a variable the machine should rely on.
Since the machine is aware of its own CPUs, use its first CPU
directly to load the firmware.
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200924111808.77168-3-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>