Prerequisites:
==============
0. Ubuntu 16.04 should be used as the host OS for the build.
1. Install the required tools: apt-get install gawk wget diffstat texinfo chrpath socat libsdl1.2-dev \
python-crypto repo checkpolicy python-git python-github \
python-ctypeslib bzr pigz m4 lftp openjdk-8-jdk git-core \
gnupg flex bison gperf build-essential zip curl zlib1g-dev \
gcc-multilib g++-multilib libc6-dev-i386 lib32ncurses5-dev \
x11proto-core-dev libx11-dev lib32z-dev ccache libgl1-mesa-dev \
libxml2-utils xsltproc unzip python-clang-5.0 gcc-5 g++-5 bc \
python3-pyelftools python3-pip gcc-aarch64-linux-gnu python3-crypto -y
2. Install the pycryptodomex package: sudo pip3 install pycryptodomex
3. Download and install Google repo (see the example after this list): https://source.android.com/setup/build/downloading#installing-repo
4. Verified with Python 2.7.12, but other versions should also work.
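For reference, the page linked in step 3 installs repo roughly as follows (the destination directory is up to you):
```
mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:"$PATH"
```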
About:
======
The prod-devel product is based on the Renesas BSP, the Xen hypervisor and Android. It contains the set of features required to have
both Display and GPU support in several domains at the same time (device passthrough, PV Display, PV Sound, VGPU, etc.).
There are three domains which run on top of Xen:
1. A generic, machine-independent control domain named "Domain-0". This initramfs-based domain is responsible
for managing VMs only (create/destroy/reboot guest domains). No HW is assigned to this domain.
2. A machine-dependent driver domain named "DomD". Almost all HW is assigned to this domain.
As this domain is not 1:1 mapped, Xen controls the IPMMU to perform proper address translation (IPA to PA) for DMA masters.
It also runs various para-virtualized (PV) backends in order to provide guest domains
(without HW assigned) with services such as Audio, Display, Net, Block, etc.
3. Either an Android-based domain named "DomA" or a Linux-based domain named "DomU".
Both run various PV frontends and don't have any HW assigned except the GPU,
which is shared between "DomD" and "DomA" ("DomU") in the current setup.
Build:
======
Our build system uses a set of additional meta layers and tools.
1. Clone the build scripts (master branch):
git clone https://github.com/xen-troops/build-scripts.git
cd build-scripts
2. In the build-scripts directory you will find a sample configuration
file gcp-build-server-devel.cfg:
cp gcp-build-server-devel.cfg xt-prod-devel.cfg
3. Edit it to fit your environment.
3.1 Change the variable in the [git] section:
xt_manifest_uri = https://github.com/xen-troops/meta-xt-products.git
3.2 Change the variables in the [path] section (the example directories can be created beforehand, as shown below):
- workspace_base_dir: change it to point to the place where the build will happen
- workspace_storage_base_dir: change it to where downloads and other files will be put
- workspace_cache_base_dir: points to the state cache
For example,
workspace_base_dir = /home/workspace_base
workspace_storage_base_dir = /home/workspace_storage_base
workspace_cache_base_dir = /home/workspace_cache
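If needed, the example directories above can be created up front (the paths are just the sample values):
```
mkdir -p /home/workspace_base /home/workspace_storage_base /home/workspace_cache
```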
3.3 Change the variables in the [local_conf] section:
3.3.1 The XT_GUESTS_INSTALL and XT_GUESTS_BUILD variables select the guest domain
to be built and installed.
For example (system with DomU),
XT_GUESTS_INSTALL = "domu"
XT_GUESTS_BUILD = "domu"
For example (system with DomA),
XT_GUESTS_INSTALL = "doma"
XT_GUESTS_BUILD = "doma"
3.3.2 The XT_RCAR_EVAPROPRIETARY_DIR (DomD and DomU) and XT_ANDROID_PREBUILDS_DIR (DomA) variables
point to the place where the prebuilt EVA proprietary "graphics" packages (hereafter "graphics" packages) will be copied in step #4.
If you are going to build the Graphics DDK from sources (and don't use the "graphics" packages), remove those variables.
In that case step #4 can be omitted.
3.3.3 The XT_RCAR_PROPRIETARY_MULTIMEDIA_DIR (DomD) variable points to the place where the proprietary "multimedia" packages
(hereafter "multimedia" packages) will be copied in step #5.
If you are not going to use multimedia in DomD, remove that variable. In that case step #5 can be omitted.
3.3.4 Building the VIS server
To use the VIS server, set the XT_USE_VIS_SERVER variable to any non-empty value.
After that, you have two mutually exclusive options: use a prebuilt binary or build from proprietary sources.
Prebuilt binary:
AOS_VIS_PACKAGE_DIR should specify a folder containing a file named `aos-vis`.
This file is a renamed `aos-vis_git-r0_aarch64.ipk`.
So, you can put `aos-vis_git-r0_aarch64.ipk` into AOS_VIS_PACKAGE_DIR and rename it, or create a link to it:
```
ln -sfr aos-vis_git-r0_aarch64.ipk aos-vis
```
Proprietary sources:
AOS_VIS_PLUGINS allows you to specify the VIS data provider.
You can choose "telemetryemulator" or "renesassimulator".
The "telemetryemulator" plugin uses prerecorded data provided by the telemetryemulator service launched in DomD.
The "renesassimulator" plugin uses data provided by the Renesas R-Car simulator.
If AOS_VIS_PLUGINS is not specified, then "renesassimulatoradapter" will be used.
So, the VIS-related lines in the config may look like:
```
XT_USE_VIS_SERVER = "true"
AOS_VIS_PLUGINS = "renesassimulatoradapter"
```
or
```
XT_USE_VIS_SERVER = "true"
AOS_VIS_PLUGINS = "telemetryemulator"
```
or
```
XT_USE_VIS_SERVER = "true"
AOS_VIS_PACKAGE_DIR = "/home/me/prebuilt_vis/"
```
If you specify both AOS_VIS_PACKAGE_DIR and AOS_VIS_PLUGINS in the same config,
then AOS_VIS_PACKAGE_DIR will be used.
4. Download and copy the "graphics" packages to some directory on your local filesystem:
4.1 DomA prebuilts:
- rcar-prebuilts-salvator-xs-h3-xt-doma.tar.gz
Unpack it to some "..ANDROID_PREBUILTS_FOLDER_PATH..",
i.e. you should have the following directory structure:
"..ANDROID_PREBUILTS_FOLDER_PATH.."
├── pvr-km
│ └── pvrsrvkm.ko
└── pvr-um
└── <r8a7795 or r8a7796>
├── prebuilds.mk
└── vendor
├── bin
│ ├── hwperfbin2jsont
│ ├── pvrhwperf
│ ├── pvrlogdump
│ ├── pvrlogsplit
│ ├── pvrsrvctl
│ └── rscompiler
├── etc
│ ├── firmware
│ │ ├── rgx.fw.4.46.6.62
│ │ └── rgx.fw.4.46.6.62.vz
│ └── powervr.ini
├── lib
│ ├── egl
│ │ ├── libEGL_POWERVR_ROGUE.so
│ │ ├── libGLESv1_CM_POWERVR_ROGUE.so
│ │ └── libGLESv2_POWERVR_ROGUE.so
│ ├── hw
│ │ ├── gralloc.r8a7795.so
│ │ ├── memtrack.r8a7795.so
│ │ └── vulkan.r8a7795.so
│ ├── libAppHintsIPC.so
│ ├── libglslcompiler.so
│ ├── libIMGegl.so
│ ├── libPVRRS.so
│ ├── libPVRScopeServices.so
│ ├── libsrv_um.so
│ ├── libufwriter.so
│ ├── libusc.so
│ └── [email protected]
└── lib64
├── egl
│ ├── libEGL_POWERVR_ROGUE.so
│ ├── libGLESv1_CM_POWERVR_ROGUE.so
│ └── libGLESv2_POWERVR_ROGUE.so
├── hw
│ ├── gralloc.r8a7795.so
│ ├── memtrack.r8a7795.so
│ └── vulkan.r8a7795.so
├── libAppHintsIPC.so
├── libglslcompiler.so
├── libIMGegl.so
├── libPVRRS.so
├── libPVRScopeServices.so
├── libsrv_um.so
├── libufwriter.so
├── libusc.so
XT_ANDROID_PREBUILDS_DIR should point to the "..ANDROID_PREBUILTS_FOLDER_PATH.." folder (see the example below).
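For example, a minimal sketch assuming a hypothetical /home/me/android_prebuilts directory:
```
# Create a destination directory (any path works) and unpack the DomA prebuilts into it.
mkdir -p /home/me/android_prebuilts
tar -xzf rcar-prebuilts-salvator-xs-h3-xt-doma.tar.gz -C /home/me/android_prebuilts
# Then, in xt-prod-devel.cfg:
#   XT_ANDROID_PREBUILDS_DIR = "/home/me/android_prebuilts"
```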
4.2 DomD and DomU (if the latter is used instead of DomA) prebuilts:
For example (Salvator-X board with H3 ES3.0 SoC installed, 8GB RAM),
- rcar-proprietary-graphic-salvator-x-h3-4x2g-xt-domd.tar.gz
- rcar-proprietary-graphic-salvator-x-h3-4x2g-xt-domu.tar.gz
Copy them to some "..PROPRIETARY_FOLDER_PATH.." folder.
XT_RCAR_EVAPROPRIETARY_DIR should point to the "..PROPRIETARY_FOLDER_PATH.." folder.
Since the SoC is actually the same (H3 ES3.0) on the Salvator-XS 8GB and H3ULCB(-KF|-AB) 8GB boards,
you can reuse the already provided binaries by making symlinks to the corresponding archives.
For example (Salvator-XS),
ln -s rcar-proprietary-graphic-salvator-x-h3-4x2g-xt-domd.tar.gz \
rcar-proprietary-graphic-salvator-xs-h3-4x2g-xt-domd.tar.gz
ln -s rcar-proprietary-graphic-salvator-x-h3-4x2g-xt-domu.tar.gz \
rcar-proprietary-graphic-salvator-xs-h3-4x2g-xt-domu.tar.gz
For example (H3ULCB),
ln -s rcar-proprietary-graphic-salvator-x-h3-4x2g-xt-domd.tar.gz \
rcar-proprietary-graphic-h3ulcb-4x2g-xt-domd.tar.gz
ln -s rcar-proprietary-graphic-salvator-x-h3-4x2g-xt-domu.tar.gz \
rcar-proprietary-graphic-h3ulcb-4x2g-xt-domu.tar.gz
5. Only required for multimedia usage.
Follow the procedure described in "Preliminary steps #1" at https://elinux.org/R-Car/Boards/Yocto-Gen3
to download the Multimedia and Graphics library and the related Linux drivers. These software packages
should be stored in the directory pointed to by the XT_RCAR_PROPRIETARY_MULTIMEDIA_DIR variable.
Please note that although only the "multimedia" packages will be used here, there is no need
to re-pack the package archives in order to remove the "graphics" packages; unneeded packages
will simply be skipped.
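For example, a sketch assuming a hypothetical /home/me/multimedia directory and that the archives were downloaded into the current directory (the glob below is illustrative; take the actual archive names from the elinux.org page):
```
# Collect the downloaded Renesas proprietary archives in one directory.
mkdir -p /home/me/multimedia
cp R-Car_Gen3_Series_Evaluation_Software_Package_*.zip /home/me/multimedia/
# Then, in xt-prod-devel.cfg:
#   XT_RCAR_PROPRIETARY_MULTIMEDIA_DIR = "/home/me/multimedia"
```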
6. Run the build script for the current stable release:
python ./build_prod.py --build-type dailybuild --machine MACHINE_NAME --product PRODUCT_NAME \
--branch RELEASE_NAME --with-local-conf --config xt-prod-devel.cfg --with-do-build
Here PRODUCT_NAME depends on whether the "graphics" packages are used or not:
- devel (to use the "graphics" packages)
- devel-src (to build the Graphics DDK from sources)
RELEASE_NAME is the name of the release to be built.
This is done in order to track product releases easily and to make it possible to build any release
simply by specifying the proper release name.
Please note that if the "branch" parameter is not set, the current product snapshot will be built.
A concrete invocation example is given after the table below.
List of supported MACHINE_NAMEs:
MACHINE_NAME          Board name          SoC        RAM   prod-devel-unsafe only?   DomA supported?
-----------------------------------------------------------------------------------------------------
salvator-x-h3         Salvator-X          H3 ES2.0   4GB   prod-devel-unsafe only
salvator-x-h3-4x2g    Salvator-X          H3 ES3.0   8GB
salvator-x-m3         Salvator-X          M3         4GB   prod-devel-unsafe only    Not supported for DomA
salvator-xs-m3-2x4g   Salvator-XS         M3 ES3.0   8GB
salvator-xs-h3        Salvator-XS         H3 ES2.0   4GB   prod-devel-unsafe only
salvator-xs-h3-4x2g   Salvator-XS         H3 ES3.0   8GB
salvator-xs-m3n       Salvator-XS         M3N        2GB                             Not supported for DomA
h3ulcb                H3ULCB              H3 ES2.0   4GB   prod-devel-unsafe only
m3ulcb                M3ULCB              M3         2GB   prod-devel-unsafe only    Not supported for DomA
h3ulcb-4x2g           H3ULCB              H3 ES3.0   8GB
h3ulcb-4x2g-kf        H3ULCB KingFisher   H3 ES3.0   8GB
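For example, a snapshot build (no --branch) for Salvator-XS with H3 ES3.0 and 8GB RAM, using the prebuilt "graphics" packages:
```
python ./build_prod.py --build-type dailybuild --machine salvator-xs-h3-4x2g \
    --product devel --with-local-conf --config xt-prod-devel.cfg --with-do-build
```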
7. You are done. The artifacts of the build are located in the workspace_base directory:
workspace_base/build/build/deploy/
├── dom0-image-thin-initramfs
│ └── images
│ └── generic-armv8-xt
│
├── domd-image-weston
│ └── images
│ └── MACHINE_NAME-xt
│
├── domu-image-android
│ └── images
│ └── qemux86-64
│ ├── boot.img
│ ├── Image -> vmlinux
│ ├── system.img
│ ├── userdata.img
│ ├── vendor.img
│ └── vmlinux
│
│ - or -
│
├── domu-image-weston
│ └── images
│ └── MACHINE_NAME-xt
│
└── mk_sdcard_image.sh
Images are located at:
- Domain-0:
workspace_base/build/build/deploy/dom0-image-thin-initramfs/images/generic-armv8-xt
Here we get some of the boot images:
- uInitramfs - thin initramfs for Domain-0
- Image - kernel image for Domain-0
- DomD:
workspace_base/build/build/deploy/domd-image-weston/images/MACHINE_NAME-xt
Here we get the remaining boot images, all bootloader images and the rootfs image for DomD:
- xen-uImage - Xen main image
- xenpolicy - special image for Xen usage
- dom0.dtb - device-tree image for Domain-0
- bootloader images in both binary and srec formats.
- core-image-weston-MACHINE_NAME-xt.tar.bz2 - rootfs image for DomD
- DomA:
workspace_base/build/build/deploy/domu-image-android/images/qemux86-64
Here we get the kernel and rootfs images for DomA:
- Image -> vmlinux - kernel image for DomA
- boot.img - Android boot image (not used further)
- system.img - Android system image
- vendor.img - Android vendor image
- userdata.img - Android userdata image
- DomU:
workspace_base/build/build/deploy/domu-image-weston/images/MACHINE_NAME-xt
Here we get a rootfs image for DomU:
- core-image-weston-MACHINE_NAME-xt.tar.bz2 - rootfs image for DomU
Build logs are located at:
- Domain-0:
workspace_base/build/build/log/dom0-image-thin-initramfs/cooker/generic-armv8-xt
- DomD:
workspace_base/build/build/log/domd-image-weston/cooker/MACHINE_NAME-xt
- DomA:
workspace_base/build/build/log/domu-image-android/cooker/qemux86-64
- DomU:
workspace_base/build/build/log/domu-image-weston/cooker/MACHINE_NAME-xt
8. If you want to build any domain's images by hand, at development time
for instance, you can do so by going into the desired directory and using poky to build
(a complete example for DomD is shown after this list):
- For building Domain-0:
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/
- For building DomD:
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/domd-image-weston/1.0-r0/repo/
- For building DomA:
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/domu-image-android/1.0-r0/repo/
- For building DomU:
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/domu-image-weston/1.0-r0/repo/
source poky/oe-init-build-env
- For building Domain-0:
bitbake core-image-thin-initramfs
- For building DomD or DomU:
bitbake core-image-weston
- For building DomA:
bitbake android
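Putting the above together, a full manual rebuild of, for example, DomD looks like:
```
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/domd-image-weston/1.0-r0/repo/
source poky/oe-init-build-env
bitbake core-image-weston
```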
Usage:
======
It is possible to run the system either using TFTP boot with an NFS root or keeping
all images on a storage device such as an eMMC or SD card.
Various helper scripts and docs are located at:
build-workspace/build/meta-xt-prod-devel/doc
Let's consider the available boot options in detail.
1. Using a storage device.
In order to boot the system from a storage device, the device should be prepared
and flashed beforehand. The mk_sdcard_image.sh script is intended to help with that:
sudo ./mk_sdcard_image.sh -p /IMAGE_FOLDER -d /IMAGE_FILE -c devel
Here IMAGE_FOLDER is the path to the folder where the artifacts live (in the context of this document
it is the "deploy" directory) and IMAGE_FILE is an output image file or a physical device as it
appears in the filesystem (/dev/sdX). Since the script is intended to support different products,
we need to specify '-c devel' for this product.
1.1 In case of SD card booting, we just have to insert the SD card into the host machine and run the script,
which will perform all required actions automatically. All we need to take care of is setting the
proper environment variables (boot_dev/bootcmd) from the U-Boot command line according to the chosen SD device.
See workspace_base/meta-xt-prod-gen3-test/doc/u-boot-env-salvator-x.txt for details.
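For orientation only, variables are set and persisted from the U-Boot prompt as sketched below; the actual boot_dev/bootcmd values are board-specific and must be taken from the u-boot-env document mentioned above:
```
=> setenv boot_dev <value from the u-boot-env doc>
=> setenv bootcmd '<value from the u-boot-env doc>'
=> saveenv
```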
1.2 In case of eMMC booting, we have to have access to the eMMC in order to flash the images.
This is not quite as easy as with a removable SD card, but one possible way is
to prepare the image blob using the same script, copy the resulting blob to the NFS root directory,
set the system to boot via NFS, go to DomD on the target (where the eMMC device is available)
and copy the blob to the eMMC using the "dd" command.
For example,
prepare an image blob:
- For system with DomA:
sudo ./mk_sdcard_image.sh -p /IMAGE_FOLDER -d /home/emmc.img -c devel
- For system with DomU:
sudo ./mk_sdcard_image.sh -p /IMAGE_FOLDER -d /home/emmc.img -c gen3
and then run on target:
dd if=/home/emmc.img of=/dev/mmcblk0
After the eMMC is flashed, we have to select it as the boot device, in a similar way as is done
for the SD card. A consolidated sketch of the whole procedure is shown below.
See u-boot-env.txt for details.
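As a consolidated sketch (paths are examples; /srv/domd is the DomD NFS root used in the TFTP section below):
```
# On the host: prepare the image blob and put it into the DomD NFS root
sudo ./mk_sdcard_image.sh -p /IMAGE_FOLDER -d /home/emmc.img -c devel
sudo cp /home/emmc.img /srv/domd/home/
# On the target, in DomD booted via NFS: flash the blob to the eMMC
dd if=/home/emmc.img of=/dev/mmcblk0 bs=4M
sync
```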
2. Using TFTP boot with NFS root.
In case of TFTP booting, we have to copy the following boot images into a target-accessible TFTP boot
directory and extract the guest domain rootfs images to serve as NFS root directories for the guest domains.
See https://github.com/xen-troops/manifests/wiki on how to setup TFTP, NFS, etc.
For example (for system with DomU),
copy boot images to TFTP boot directory:
sudo mkdir /srv
cd workspace_base/build/build/tmp/deploy/dom0-image-thin-initramfs/images/generic-armv8-xt/
sudo cp uInitramfs /srv/
sudo cp Image /srv/
cd workspace_base/build/build/tmp/deploy/domd-image-weston/images/MACHINE_NAME-xt/
sudo cp dom0.dtb /srv/
sudo cp xenpolicy /srv/
sudo cp xen-uImage /srv/
extract DomD rootfs images:
sudo mkdir /srv/domd
cd workspace_base/build/build/tmp/deploy/domd-image-weston/images/MACHINE_NAME-xt/
sudo tar -xf core-image-weston-MACHINE_NAME-xt.tar.bz2 -C /srv/domd/
extract DomU rootfs images:
sudo mkdir /srv/domu
cd workspace_base/build/build/tmp/deploy/domu-image-weston/images/MACHINE_NAME-xt/
sudo tar -xf core-image-weston-MACHINE_NAME-xt.tar.bz2 -C /srv/domu/
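The wiki linked above covers the TFTP/NFS server setup itself; for orientation, /etc/exports entries for the two rootfs directories might look like this (the options shown are a common choice, not a requirement of this product):
```
/srv/domd *(rw,no_root_squash,no_subtree_check)
/srv/domu *(rw,no_root_squash,no_subtree_check)
```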
It is worth mentioning that some changes to the domain config files are necessary to switch between NFS
and any of the storage devices.
There are two options for doing that: either modify the source files and rebuild
the whole Domain-0, or modify the domain config files right in the built uInitramfs image, which
implies unpacking the image beforehand and packing it back afterwards (the "uirfs.sh" script, which
we will consider later in this document, is intended to help with that).
In order to modify the sources, go to their location in the inner yocto:
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/meta-xt-prod-extra/recipes-extended/guest-addons/files
- Edit domd-MACHINE_NAME.cfg to uncomment the "extra" option which corresponds to NFS.
- Edit domu-MACHINE_NAME.cfg to uncomment the "extra" option which corresponds to NFS, and comment the "disk" option.
and then rebuild Domain-0 manually:
cd workspace_base/build/build/tmp/work/x86_64-xt-linux/dom0-image-thin-initramfs/1.0-r0/repo/
source poky/oe-init-build-env
bitbake core-image-thin-initramfs
So, as you can see, by varying U-Boot's "boot_dev" and "bootcmd" environment variables
and the domain configs' "extra" and "disk" options, it is possible to choose a different boot device
for each system component. For example, boot the system using TFTP and then use an SD card for the guest domains.
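For illustration only, the relevant fragment of a guest domain config toggles between lines like the following two ("extra" and "disk" are standard xl config options; the exact strings in this product's config files differ, so use the ones already present there):
```
# NFS root: uncomment "extra" (the values here are placeholders)
extra = "root=/dev/nfs nfsroot=<NFS_SERVER_IP>:/srv/domu ip=dhcp"
# Storage root: uncomment "disk" instead
# disk = [ 'phy:/dev/mmcblk0p3,xvda,w' ]
```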
Additional scripts and notes.
1. uirfs.sh.
This script is intended to pack/unpack the uInitramfs for Domain-0. It might be helpful since the uInitramfs
contains a lot of things which may change during testing. The "xt" directory ships all guest domain
configs, device-tree and kernel images, etc.
For example,
unpack uInitramfs:
cd /srv
sudo mkdir initramfs
sudo ./uirfs.sh unpack uInitramfs initramfs
Modify its components if needed.
For example, domain config files located at:
/srv/initramfs/xt/dom.cfg/
pack it back:
sudo ./uirfs.sh pack uInitramfs initramfs
2. Product image structure (for system with DomA)
The product image contains the following partitions: dom0, domd, domu/doma.
DomA itself contains its partitions: system, vendor, misc, vbmeta, metadata, userdata.
You can inspect partitions using losetup and lsblk.
See the example below for Ubuntu 18.04, assuming that the image was named 'prod-devel.img'.
Note that the text after ## is a comment on the console output.
Partition sizes may vary.
```
$ sudo losetup --find --partscan --show ./prod-devel.img
/dev/loop23
$ lsblk /dev/loop23
NAME       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop23       7:23   0    7G  0 loop
├─loop23p1 259:0    0  256M  0 loop ## dom0
├─loop23p2 259:1    0    2G  0 loop ## domd
└─loop23p3 259:2    0  4.3G  0 loop ## doma in this case
$ sudo losetup --find --partscan --show /dev/loop23p3
/dev/loop24
$ lsblk /dev/loop24
NAME       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop24       7:24   0  7.5G  0 loop
├─loop24p1 259:3    0   30M  0 loop ## bootimage A
├─loop24p2 259:4    0   30M  0 loop ## bootimage B
├─loop24p3 259:5    0    1M  0 loop ## vbmeta A
├─loop24p4 259:6    0    1M  0 loop ## vbmeta B
├─loop24p5 259:7    0    1M  0 loop ## misc
├─loop24p6 259:8    0   11M  0 loop ## metadata
├─loop24p7 259:9    0    1M  0 loop ## rpmbemul
├─loop24p8 259:10   0  4.5G  0 loop ## super
└─loop24p9 259:11   0    3G  0 loop ## userdata
sudo losetup -d /dev/loop24
sudo losetup -d /dev/loop23
```