VEOS: How to Execute VE Program
Date: Jun-2023

This document describes VEOS version 3.1.1 or later.

First of all, you have to log in to your target VH, i.e. a Linux/x86 machine
which has VEs.

- How to check the number of VE nodes and cores

  $ unset VE_NODE_NUMBER
  $ /opt/nec/ve/bin/ve-uptime | grep Node
  VE Node: 1
  VE Node: 0
           ^this number means VE Node number

  In this case, you can use two VEs, #0 and #1.

  $ /opt/nec/ve/bin/ve-nproc
  VE Node: 1
  8
  VE Node: 0
  8

  In this case, you can use 8 cores on each VE node.

  (NOTE) You get the best VE performance when you execute a VE program whose
         number of threads/processes is less than or equal to the number of
         VE CPU cores, because no context switching occurs while the VE
         program is executing.


- How to check VEOS mode

  VEOS supports two operation modes: Normal mode and NUMA mode.
  You can check the mode with the /opt/nec/ve/bin/venumainfo command.

   $ /opt/nec/ve/bin/venumainfo

  In Normal mode, venumainfo displays "available: 1 nodes(0)" and shows the
  information of the VE as "node 0".

  In NUMA mode, venumainfo displays "available: 2 nodes(0-1)" and shows the
  information of "node 0" (NUMA node 0) and "node 1" (NUMA node 1).


- How to make a VE program

  $ vi hello.c
  $ /opt/nec/ve/bin/ncc hello.c -o hello
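  The hello.c above can be any C program; the compiler, ncc, is the only
  VE-specific part. As a minimal sketch (the program content below is just an
  illustration, not part of VEOS):

  ```shell
  # Create a minimal hello.c; the source itself is plain C, nothing
  # VE-specific. Afterwards, compile it with /opt/nec/ve/bin/ncc as above.
  cat > hello.c <<'EOF'
  /* hello.c: a minimal example program */
  #include <stdio.h>

  int main(void)
  {
      printf("Hello, VE\n");
      return 0;
  }
  EOF
  ```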


- How to run a VE program

  If you want to execute your program on VE Node #0:

   $ /opt/nec/ve/bin/ve_exec -N 0 ./hello

  If you want to execute your program on VE Node #1:

   $ /opt/nec/ve/bin/ve_exec -N 1 ./hello

  Alternatively, if you set the environment variable for the VE Node number,
  it is not necessary to specify the -N option:

   $ export VE_NODE_NUMBER=1
   $ /opt/nec/ve/bin/ve_exec ./hello

  When you install VEOS, binfmt for VE is configured, so it is possible to
  execute VE programs without ve_exec. If multiple VE nodes exist, the VE
  node which executes the VE program is specified by the environment variable
  VE_NODE_NUMBER:

   $ export VE_NODE_NUMBER=1
   $ ./hello

  When you want to change the paths which the dynamic linker searches for
  dynamic/shared libraries, you can use VE_LD_LIBRARY_PATH, a predefined
  environment variable in VEOS. For example, if you want to set it to
  "/path/to/user/lib":

   $ export VE_LD_LIBRARY_PATH=/path/to/user/lib
   $ /opt/nec/ve/bin/ve_exec ./hello

  Please see the 've_exec options' section and the 'Environment Variables'
  section for more variables.


- How to run a VE program (NUMA mode)

  When VEOS is running in NUMA mode, you can use the VE like a NUMA system.

  If you execute a program without any NUMA options, VEOS creates a process
  for the program on the NUMA node whose load is lower. When the process
  requests memory allocation, memory belonging to the local NUMA node on
  which the process is running (local memory) is allocated first. If the
  local memory is full, memory belonging to the other NUMA node (remote
  memory) is allocated.

   $ /opt/nec/ve/bin/ve_exec ./hello
  
  You can use options to specify the NUMA node to use and the memory policy.

   $ /opt/nec/ve/bin/ve_exec --cpunodebind=0 --localmembind ./hello

  You can also use an environment variable for NUMA, VE_NUMA_OPT, to specify
  the options.

   $ export VE_NUMA_OPT="--cpunodebind=0 --localmembind"
   $ /opt/nec/ve/bin/ve_exec ./hello

  (Note)
  By default, VEOS allocates local memory first, then allocates remote
  memory if local memory is full; VEOS defines this policy as MPOL_DEFAULT.
  When '--localmembind' is specified, VEOS allocates local memory only; VEOS
  defines this policy as MPOL_BIND.

  Please see the 've_exec options' section and the 'Environment Variables'
  section for details.


- How to add path to the default library search path

  Please put a configuration file specifying the additional search paths into
  the "/etc/opt/nec/ve/ld.so.conf.d" directory and then run the following
  command.

   $ sudo /opt/nec/ve/sbin/ve-ldconfig

  The name of the configuration file must end with ".conf", and the file
  must contain the list of additional search paths.
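  For example, a configuration file might look like the following. The search
  path /opt/mylib/lib64 and the file name mylib.conf are placeholders, and
  one search path per line is assumed, as in standard ldconfig configuration
  files:

  ```shell
  # Write a hypothetical search path into a sample .conf file.
  # On the VH, place the file into /etc/opt/nec/ve/ld.so.conf.d/ and then run:
  #   sudo /opt/nec/ve/sbin/ve-ldconfig
  cd "$(mktemp -d)"                          # scratch directory for this demo
  printf '/opt/mylib/lib64\n' > mylib.conf   # one search path per line
  cat mylib.conf
  ```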


- How to enable Accelerated I/O

  "Accelerated I/O" is a feature which improves I/O performance by efficient
  data transfer between VE and VH.

  The throughput and latency of the following read/write family system calls
  are improved.

      read     write
      pread    pwrite
      readv    writev
      preadv   pwritev

  A system administrator needs to reserve huge pages for Accelerated
  I/O through the kernel parameter "vm.nr_hugepages" following
  instructions in "SX-Aurora TSUBASA Installation Guide".

  Accelerated I/O is enabled by default.

   $ ./hello

  Please set the environment variable VE_ACC_IO to 0 to disable Accelerated
  I/O.

   $ export VE_ACC_IO=0
   $ ./hello

  Please note the following:

    * When Accelerated I/O is enabled, it uses 8MB of hugepages per thread
      by default.
    * Accelerated I/O transfers data in 4MB units, so read/write family
      system calls are not atomic when the size is more than 4MB.

  Users can set the environment variable VE_ACC_IO_VERBOSE=1 to display
  whether accelerated I/O is enabled or disabled to standard error when
  a VE process exits.

   $ export VE_ACC_IO=0
   $ export VE_ACC_IO_VERBOSE=1
   $ ./a.out
     Accelerated IO is disabled

   $ export -n VE_ACC_IO
   $ export VE_ACC_IO_VERBOSE=1
   $ ./a.out
     Accelerated IO is enabled

- How to limit the resource of VE process
  Please set the environment variable VE_LIMIT_OPT to limit the resources
  of a VE process.

   $ export VE_LIMIT_OPT="-c 10240 --softm 1234 --hardm 2345 -t 10"

  The resources supported by the VE_LIMIT_OPT environment variable are as
  follows.
    -v : virtual memory            RLIMIT_AS
    -c : core file size            RLIMIT_CORE
    -t : cpu time                  RLIMIT_CPU
    -d : data seg size             RLIMIT_DATA
    -m : max memory size           RLIMIT_RSS
    -i : pending signals           RLIMIT_SIGPENDING
    -s : stack size                RLIMIT_STACK

  The short options listed above set both the soft and hard limits for a VE
  process. Users can also set each resource's soft and hard limits
  individually through the VE_LIMIT_OPT environment variable using the long
  options listed below.

    --softv: RLIMIT_AS soft limit         --hardv: RLIMIT_AS hard limit
    --softc: RLIMIT_CORE soft limit       --hardc: RLIMIT_CORE hard limit
    --softt: RLIMIT_CPU soft limit        --hardt: RLIMIT_CPU hard limit
    --softd: RLIMIT_DATA soft limit       --hardd: RLIMIT_DATA hard limit
    --softm: RLIMIT_RSS soft limit        --hardm: RLIMIT_RSS hard limit
    --softi: RLIMIT_SIGPENDING soft limit --hardi: RLIMIT_SIGPENDING hard limit
    --softs: RLIMIT_STACK soft limit      --hards: RLIMIT_STACK hard limit

  If any of the above (either or both of the hard and soft limits) is not
  provided in the environment variable, the limit is inherited from the
  'ulimit' values.

  Other resource limits, which are not supported by VE_LIMIT_OPT, are
  inherited from the VH ulimit. If the stack size is defined in both
  VE_STACK_LIMIT and VE_LIMIT_OPT, the value specified by '-s' in
  VE_LIMIT_OPT takes precedence and is applied to the VE process.

  These resource limits apply to VE processes only; the same limits of the
  corresponding pseudo process are inherited from the VH ulimit.


- How to show the resource limitation of VE process
  When "--show-limit" is specified as a command line argument to ve_exec,
  it displays the applicable soft and hard limits of all resources supported
  by VE_LIMIT_OPT in the 3rd and 4th columns of its output, respectively.

   $ /opt/nec/ve/bin/ve_exec --show-limit
    core file size     (blocks, -c)    10240        10240
    data seg size      (kbytes, -d)    unlimited    unlimited
    pending signals    (-i)            79349        79349
    max memory size    (kbytes, -m)    1234         2345
    stack size         (kbytes, -s)    unlimited    unlimited
    cpu time           (seconds, -t)   10           10
    virtual memory     (kbytes, -v)    unlimited    unlimited

- How to change time-slice and timer-interval

  VEOS supports displaying and updating the time-slice and timer-interval of
  the VEOS scheduler. Users can display and dynamically update them by using
  the /opt/nec/ve/bin/veosctl command.

  The usage of /opt/nec/ve/bin/veosctl command is as follows.

   $ /opt/nec/ve/bin/veosctl [-s|--show] [-h|--help] [-V|--version]
   $ /opt/nec/ve/bin/veosctl [-t|--timer-interval <value>] [-T|--time-slice <value>]

  /opt/nec/ve/bin/veosctl command accepts the following options.

   -T value, --time-slice=value       Update the VEOS scheduler's time-slice to
                                      'value' in milliseconds. It needs
                                      privileged permission.

   -t value, --timer-interval=value   Update the VEOS scheduler's timer-
                                      interval to 'value' in milliseconds. It
                                      needs privileged permission.

   -s, --show                         Display the timer-interval and time-slice
                                      of the VEOS scheduler in milliseconds.

   -V, --version                      Display version information and exit.

   -h, --help                         Display this help and exit.

  The /opt/nec/ve/bin/veosctl command updates the time-slice and
  timer-interval of the VEOS scheduler immediately, without restarting VEOS.
  The changes made by this command are lost when VEOS restarts. To update
  the default time-slice and timer-interval of the VEOS scheduler, please
  specify the options in
  /etc/opt/nec/ve/veos/ve-os-launcher.d/veos_timer.options and then restart
  VEOS.
  Updating /etc/opt/nec/ve/veos/ve-os-launcher.d/veos_timer.options affects
  all VEOS instances. To update the default time-slice and timer-interval of
  an individual VEOS scheduler, do the following.
   1. Create /etc/opt/nec/ve/veos/ve-os-launcher.d/<N> directory.
      <N> is VE node number.
   2. Copy /etc/opt/nec/ve/veos/ve-os-launcher.d/veos_timer.options to
      /etc/opt/nec/ve/veos/ve-os-launcher.d/<N> directory.
   3. Update /etc/opt/nec/ve/veos/ve-os-launcher.d/<N>/veos_timer.options.
   4. Restart VEOS corresponding to VE node of <N>.

  Changing the time-slice and timer-interval may impact the performance of
  VE programs, so please change these values carefully.

- ve_exec options

  ve_exec command accepts the following options.

    -V, --version                 output version information and exit
    -h, --help                    display this help and exit
    -N node, --node=<node>        where node is the VE Node number on which
                                  the VE program runs
    -c core, --core=<core>        where core is the VE core number on which
                                  the VE program is executed
                                  ('-c' and '--cpunodebind' cannot both be
                                  specified)
    --                            End of options (required if the binary
                                  name starts with '-')
    --show-limit                  display the applicable soft and hard
                                  resource limits of VE process supported by
                                  VE_LIMIT_OPT environment variable
  
  NUMA mode only:
    --cpunodebind=<NUMA node ID>  Specify the NUMA node ID on which the VE
                                  program is executed
    --localmembind                Only local memory can be allocated.
                                  (MPOL_BIND)
    

- Environment Variables

  These environment variables can control VE program execution.

  * VE_NODE_NUMBER
    It specifies the VE node number on which a program will be executed.
    When you execute a VE program without VE_NODE_NUMBER and without the -N
    option of the ve_exec command, the VE program is executed on VE Node #0.

  * VE_LD_LIBRARY_PATH
    This environment variable provides the search paths for finding dynamic
    libraries, in colon-separated format.

  * VE_LD_PRELOAD
    This environment variable sets the pre-loading path for the dynamic
    linker; it allows the specified shared library to be loaded before all
    other shared libraries linked to an executable.

  * VE_LINKER_VERSION
    This environment variable specifies a dynamic linker for VE10 or VE30.
    If a user sets it to "VE3" on VE30, the VE ELF loader uses the ld.so for
    VE30. If the variable is not set, the VE ELF loader loads a dynamic
    linker in accordance with INTERP in the program header, i.e. a VE30
    program uses the dynamic linker of VE30 Glibc and a VE10 program uses
    that of VE10 Glibc.

  * VE_ACC_IO
    This environment variable is set to 1 by default, so Accelerated I/O is
    enabled by default. When it is set to 1 or a number greater than 1,
    Accelerated I/O is enabled and uses 8 MB of HugePages memory. When it is
    set to 0, Accelerated I/O is disabled. Refer to "How to enable
    Accelerated I/O" in this document for more detail.
    
  * VE_ATOMIC_IO
    When this environment variable is set to 1, atomic I/O is enabled. When
    a VE program invokes one of the read/write family or send/recv family
    system calls, a buffer is allocated on the VH side. If atomic I/O is
    enabled, this buffer's size will be the requested size, up to 2GB.

    Enabling atomic I/O has an impact on the following system calls.

      read     write
      pread    pwrite
      readv    writev
      preadv   pwritev
      send     recv
      sendto   recvfrom

    If atomic I/O is not enabled, the buffer size is fixed at 64MB. If the
    requested size is more than 64MB, data is transferred in 64MB units. So,
    read/write family system calls and send/recv family system calls will
    not be atomic in this case.

  * VE_NUMA_OPT
    It specifies the NUMA options, "--cpunodebind" and "--localmembind". If
    a NUMA option is specified both in VE_NUMA_OPT and as a command line
    argument, VEOS uses the value from the command line argument.

  * VE_LIMIT_OPT
    This environment variable is used to set the resource limits, both hard
    and soft, of a VE process. Please refer to "How to limit the resource
    of VE process" for details.

  * VE_CORE_LIMIT
    This environment variable specifies the set of VE cores available while
    executing a VE program.
    E.g. VE_CORE_LIMIT=0-1,3,5-7 (the available VE cores are 0, 1, 3, 5, 6,
    and 7)
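    The core-list syntax above can be illustrated with a small shell sketch
    (this is only an illustration of the syntax, not a VEOS tool):

    ```shell
    # Expand a VE_CORE_LIMIT-style list into individual core numbers.
    spec="0-1,3,5-7"
    cores=$(for part in ${spec//,/ }; do      # split on commas
      case "$part" in
        *-*) seq "${part%-*}" "${part#*-}" ;; # expand a range like 5-7
        *)   echo "$part" ;;                  # a single core number
      esac
    done)
    echo $cores
    ```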

  * VE_SIGPROCMASK
    This environment variable is used to set signal mask on start of VE
    program. The value of this environment variable is a bit mask which
    represents signals to be blocked. The signal number can be specified
    from 1(SIGHUP) to 31(SIGSYS) in decimal or hexadecimal.
    
    If you specify 12(SIGUSR2) in decimal:
     $ export VE_SIGPROCMASK=4096

    If you specify 12(SIGUSR2) in hexadecimal:
     $ export VE_SIGPROCMASK=0x1000
    
    If you specify multiple signal numbers, the value must be the sum of
    the values for each signal.

    If you specify 10(SIGUSR1) and 12(SIGUSR2) in decimal:
     $ export VE_SIGPROCMASK=5120

    If you specify 10(SIGUSR1) and 12(SIGUSR2) in hexadecimal:
     $ export VE_SIGPROCMASK=0x1400
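
    The mask arithmetic above can be checked with shell arithmetic: the bit
    for signal N is 1 shifted left by N, and a multi-signal mask is the
    bitwise OR (equivalently, the sum) of those bits:

    ```shell
    # Bit for signal N is (1 << N); combine signals with bitwise OR.
    mask=$(( (1 << 10) | (1 << 12) ))   # SIGUSR1 (10) + SIGUSR2 (12)
    echo "$mask"                        # decimal value
    printf '0x%X\n' "$mask"             # hexadecimal value
    export VE_SIGPROCMASK=$mask
    ```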


- How to run commands for VEOS

  If you want to execute your target command, such as ps, on VE Node #0:

   $ export VE_NODE_NUMBER=0
   $ /opt/nec/ve/bin/ve-ps

  If you want to execute your target command, such as ps, on VE Node #1:

   $ export VE_NODE_NUMBER=1
   $ /opt/nec/ve/bin/ve-ps

  When a user executes certain commands through the NQSV job scheduler,
  these commands operate on the VE nodes which the job scheduler allocates.
  Please refer to the VEOS document 'Difference Points for Commands' for
  details.


  The following commands operate on the VE nodes which the job scheduler
  allocates:
  
   ve-free
   ve-iostat
   ve-ipcrm
   ve-ipcs
   ve-lastcomm
   ve-lscpu
   ve-lslocks
   ve-mpstat
   ve-nproc
   ve-pidstat
   ve-pmap
   ve-prtstat
   ve-ps
   ve-sadf
   ve-sar
   ve-uptime
   veda-smi
   venumainfo
   veosctl
   ve-vmstat
   ve-w   

  VEOS supports the following commands, which are stored in /opt/nec/ve/bin:

   ve-aclocal
   ve-aclocal-1.16
   ve-autoconf
   ve-autoheader
   ve-autom4te
   ve-automake
   ve-automake-1.16
   ve-autoreconf
   ve-autoscan
   ve-autoupdate
   ve-free
   ve-gdb
   ve-ifnames
   ve-iostat
   ve-ipcs
   ve-ipcrm
   ve-lastcomm
   ve-ldd
   ve-libtool
   ve-libtoolize
   ve-lscpu
   ve-lslocks
   ve-mpstat
   ve-pidstat
   ve-pmap
   ve-prlimit
   ve-prtstat
   ve-ps
   ve-sadf
   ve-sar
   ve-strace
   ve-strace-log-merge
   ve-taskset
   ve-time
   ve-tload
   ve-top
   ve-uptime
   ve-libc-check
   ve_exec
   ve_validate_binary
   venumainfo
   veosctl
   veswap
   ve-vmstat
   ve-w

  VEOS supports the following commands, which are stored in /opt/nec/ve/sbin:
   
   ve-accton
   ve-convert-acct
   ve-dump-acct
   ve-ldconfig
   ve-sa
   ve-set-hugepages
  

- How to debug VE program
 
  You can use gdb:

   $ export VE_NODE_NUMBER=0
   $ /opt/nec/ve/bin/ve-gdb ./hello
   (gdb) run


- Validating executables or shared libraries

  A gap of at least 1024 bytes is required between the text section and the
  data section of an executable or a shared library in order to load it and
  execute its functions. If the size of the gap is less than 1024 bytes,
  loading fails.

  The following command checks whether the sizes of the gaps are at least
  1024 bytes.

    $ /opt/nec/ve/bin/ve_validate_binary

  Without options, the command searches for executables and shared libraries
  in the current directory. The following options are available.

    -f, --file       Specify the file to validate
    -d, --directory  Specify the directory to search for executables and
                     shared libraries

  If the sizes of the gaps of all executables and shared libraries are at
  least 1024 bytes, the command prints the following message.

    ***ALL VE BINARY/SHARED LIBRARIES ARE VALIDATED SUCCESSFULLY!!***

  If the size of the gap of an executable or a shared library is less than
  1024 bytes, the command prints the following message.

    DATA/TEXT gap is less

  If the above message is printed, please re-link the executable or
  the shared library.


- Checking the C library used for a binary

  There is no interoperability between binaries compiled with glibc and
  binaries compiled with musl-libc. If you need to distinguish which C
  library a binary is linked with, you can use the "ve-libc-check" script as
  follows. The script supports any kind of VE binary, such as "a.out", ".o",
  ".a", and ".so" files.

    $ /opt/nec/ve/bin/ve-libc-check ./a.out
    This is compiled with musl-libc: /home/userxxx/a.out

  In the case above, the message indicates that your "a.out" is compiled
  with musl-libc. If no message is printed, the binary does not require
  musl-libc, i.e. it is compiled with glibc.

  (Note)
  * musl-libc was obsoleted at the end of March 2019. SX-Aurora TSUBASA
    software doesn't support musl-libc anymore.
  * "ve-libc-check" does not support object files created from ".s" files.
    Please be very careful not to mix binaries compiled with musl-libc and
    binaries compiled with glibc when you have ".s" source code.
  * "ve-libc-check" does not support checking a library dynamically linked
    with a program, i.e. if a program compiled and linked with glibc
    dynamically loads or links a library compiled and linked with musl-libc,
    "ve-libc-check" cannot check it. Please do not forget to re-make all of
    your libraries with glibc.


- Automatic HugePages configuration tool

  SX-Aurora TSUBASA components use HugePages. The automatic HugePages
  configuration tool sets the number of required HugePages automatically.
  Please see the "SX-Aurora TSUBASA Installation Guide" regarding the
  required HugePages. This section describes the usage of the automatic
  HugePages configuration tool.

  A privileged user can use this tool to set the number of HugePages and
  overcommit HugePages.

  Usage: /opt/nec/ve/sbin/ve-set-hugepages [-f filepath] [-a pages] [-o pages] 
         [-m] [-s] [-vh]

         -a      Add <pages> to the calculated HugePages.
         -f      Specify a path to a configuration file
                 When user specifies a file, the default configuration file is
                 not loaded.
         -m      Specify a mode for NQSV socket scheduling with membind policy.
         -o      Set <pages> to the overcommit HugePages.
         -s      Show the current values.
         -v      Verbose mode 
         -h      Show this help message. 

  * Configuration file

    This tool loads options from a configuration file. The default path is
    /etc/opt/nec/ve/veos/ve-hugepages.conf; otherwise, a user can specify a
    configuration file using the '-f' option described above. When a user
    specifies a file, the default configuration file is not loaded.
    The following options are available.

    SKIP_SETTING        If SKIP_SETTING is set to 'YES', the automatic
                        HugePages configuration is disabled.

    MEMBIND             If this node is bound to an NQSV queue with 'Socket
                        scheduling' and the 'membind' policy, set MEMBIND to
                        'YES'.

    ADDITIONAL_HUGE_PAGES=<the number of HugePages>
                        HugePages will be set to the sum of the calculated
                        number of HugePages and the value of
                        ADDITIONAL_HUGE_PAGES.

    OVERCOMMIT_HUGE_PAGES=<the number of HugePages>
                        The maximum number of surplus (overcommit) HugePages
                        will be set to this value if it is valid.

    The ve-set-hugepages.service, which starts during VH bootstrap, loads
    the default configuration file.

  * NOTE

    Please do not specify the options for HugePages and overcommit HugePages
    in /etc/sysctl.conf, /etc/sysctl.d, etc., because systemd-sysctl and
    this command work exclusively regarding HugePages.

    If the HugePages (nr_hugepages) option and the overcommit HugePages
    (nr_overcommit_hugepages) option exist in the configuration files of
    systemd-sysctl.service (the sysctl command), the values from
    systemd-sysctl.service may be set instead. Please refer to the Linux
    manpage sysctl.d(5) regarding the configuration files of
    systemd-sysctl.service.

- Conversion of process accounting data version

  The supported data structure versions of process accounting differ by VEOS
  version. Users need to use a data file containing the appropriate data
  structure for the installed VEOS when using the ve-lastcomm and
  ve-dump-acct commands.

  * Before and including VEOS v2.14.1   version 14
  * After VEOS v3.0.2                   VE10/VE20: version 15, VE30: version 16

  When a user updates from VEOS v2.x to v3.0.2, existing data files are
  converted to version 15 during psacct-ve installation. Users can manually
  convert the version of a data file using the
  /opt/nec/ve/sbin/ve-convert-acct command.
  
  Usage: /opt/nec/ve/sbin/ve-convert-acct -t <version> <files>

        -t        Specify the version of the converted data structure
        <files>   Specify the target files


- Settings for running executable binaries for VE10 on VE30 machine

  When executing a VE10 binary on VE30, data used internally is saved in
  temporary files. Temporary files are stored in /var/opt/nec/ve/veos by
  default and are deleted when the process terminates or the shared library
  is unloaded.
  These temporary files can be reused under certain conditions. If you run
  the same binary repeatedly, you may be able to reduce the overhead of
  starting the process by reusing the temporary files.
  
  * Setting the number of temporary files saved
    To change the number of temporary files saved when executing binaries
    for VE10, edit /etc/opt/nec/ve/veos/ve-os-launcher.d/vemodcode.options .
    If you want to set a unique value for each VE, create the
    /etc/opt/nec/ve/veos/ve-os-launcher.d/N directory, where N is the VE
    node number, then create vemodcode.options under that directory.
    After setting vemodcode.options, restart VEOS by running
    "systemctl restart 've-os-launcher@*'" to apply the setting.
    
      - Setting the number of temporary files stored per VE
        By default, this is 0, and temporary files are deleted immediately
        after use. VEOS automatically deletes temporary files exceeding the
        specified number.
        
        ve-os-launcher@*=--ve-modcode-file-max=<NUMBER OF FILES>
        
      - Setting the number of temporary files stored per user
        Setting a per-user limit prevents a single user from storing too
        many temporary files.
        
        (Note) If the specified value is greater than --ve-modcode-file-max, 
               or if the total number of temporary files for all users exceeds 
               the --ve-modcode-file-max value, the --ve-modcode-file-max 
               setting takes precedence, and temporary files are deleted even 
               if the number is less than the setting.
  
  * Change the directory to store temporary files
    The size of the temporary files increases in proportion to the size of
    the VE10 binary to be executed.
    If the directory for storing temporary files (by default,
    /var/opt/nec/ve/veos) does not have enough space, follow the steps below
    to change the directory for storing temporary files to one with more
    space.

    1. Stop VEOS with the following command.
       systemctl stop 've-os-launcher@*'
    2. Delete the directory for storing temporary files you were using until
       then, as it will no longer be needed. However, if you used the
       default directory /var/opt/nec/ve/veos, you do not need to delete it.
    3. Create a new directory to store temporary files. The permissions of
       the directory should be 0755.
    4. Set the new directory in /etc/opt/nec/ve/veos/vemodcode.conf as follows.
       TMP_FILE_DIR=<PATH of temporary file>
    5. Start VEOS with the following command.
       systemctl start 've-os-launcher@*'
    
    (Note) If you specify a directory that does not exist in vemodcode.conf,
           VEOS will not be able to boot. Also, if you change the
           permissions of a file or directory created by VEOS under the
           directory for storing temporary files, the check will fail and
           process creation will fail.


- When you face VEOS problems

  Please provide the following information in order to analyze the problem.

    1. Version of RPM packages and the kernel
       Please execute the following commands to create lists of versions
       and provide them.
  
        $ rpm -qa --qf '%{VENDOR} %{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' \
           | grep NEC > nec-version.txt
        $ uname -r > knl-version.txt
   
    2. Log files
       Please provide the following files.
  
        /var/log/messages*
        /var/opt/nec/ve/veos/*.log.*
        /var/opt/nec/ve/log/sa/saDD_* (DD is the day when issue occurs)
        /var/opt/nec/ve/veos/core.* (if exists)
        /var/lib/systemd/coredump/core.veos.* (if exists)
        /var/lib/systemd/coredump/core.ived.* (if exists)
        /var/lib/systemd/coredump/core.vemmd.* (if exists)

    3. Process accounting files
       If process accounting is enabled, please provide the process
       accounting files of the day when the issue occurs and the next
       day.
       The following are the paths of the process accounting files.

        /var/opt/nec/ve/account/pacct_?
            The latest file.
        /var/opt/nec/ve/account/pacct_?-YYYYMMDD
            The file rotated on the day indicated by YYYYMMDD.
        /var/opt/nec/ve/account/pacct_?-YYYYMMDD.gz
            The compressed file rotated on the day indicated by YYYYMMDD.

       Please note the process information might be stored in the file of
       the next day because the process accounting file is rotated around
       3 o'clock.
  
  If the problem you are facing is something like a freeze of a VE process,
  please also provide the following information. The "gdb" package is
  required to get it.

    4. Stack traces of VEOS and ve_exec (pseudo process)
       Please execute the following command as root to get stack
       traces of VEOS and ve_exec. Please provide generated pstack-*
       files.
  
        # for i in `seq 3`; do \
          pgrep veos | while read pid; do pstack $pid > pstack-veos.$pid.$i; done; \
          pgrep ve_exec | while read pid; do pstack $pid > pstack-ve_exec.$pid.$i; done; \
          sleep 10; done


- How to gather debug log of VEOS
  If the problem you are facing is reproducible, the following debug logs
  are very helpful for analyzing it.

  Please edit the configuration file, setting the log level to "DEBUG" and
  the layout to "ve_debug" for all components, as follows.

   $ sudo cp /etc/opt/nec/ve/veos/log4crc /etc/opt/nec/ve/veos/log4crc.org
   $ sudo sed -i -e 's/INFO/DEBUG/g' -e 's/CRIT/DEBUG/g' \
      -e 's/layout="ve"/layout="ve_debug"/g' /etc/opt/nec/ve/veos/log4crc

  Then, please restart VEOS.

   $ sudo systemctl restart 've-os-launcher@*'

  Please set the following environment variable to specify the directory of
  the configuration file and to enable log output.

   $ export LOG4C_RCPATH=/etc/opt/nec/ve/veos

  Then, please reproduce the issue; you will find the log files of ve_exec
  in the current directory.

   ./ve_exec.log.*

  Please gather the log files for veos from the following default paths.

   /var/log/messages*
   /var/opt/nec/ve/veos/*.log.*
   /var/opt/nec/ve/veos/core.* (if exists)
   /var/lib/systemd/coredump/core.veos.* (if exists)
   /var/lib/systemd/coredump/core.ived.* (if exists)
   /var/lib/systemd/coredump/core.vemmd.* (if exists)

  Finally, please restore the original log level and restart VEOS.

   $ sudo cp /etc/opt/nec/ve/veos/log4crc.org /etc/opt/nec/ve/veos/log4crc
   $ sudo systemctl restart 've-os-launcher@*'
   $ unset LOG4C_RCPATH