-$1.tar.gz%, dversionmangle=s/~/+/" \
- https://github.com/openjdk/jdk17u/tags \
- (?:.*?/)?jdk-(\d[\d.]*\+\d[\d]*)\.tar\.gz debian uupdate
+opts=\
+repack,\
+compression=xz,\
+ https://github.com/openjdk/jdk17u/tags \
+ (?:.*?/)?jdk-(\d[\d.]*\+\d[\d]*)\.tar\.gz
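For a quick sanity check of a watch stanza like the one above, uscan can be run in report-only mode from the top of the unpacked Debian source tree (assuming the stanza is installed as debian/watch):

```
# Report which upstream tags the watch file matches, without
# downloading or repacking anything.
uscan --no-download --verbose

# Actually fetch, repack and recompress the newest matching tag.
uscan --verbose
```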
diff -Nru openjdk-17-17.0.6+10/doc/building.html openjdk-17-17.0.7+7/doc/building.html
--- openjdk-17-17.0.6+10/doc/building.html 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/doc/building.html 2023-04-12 20:11:58.000000000 +0000
@@ -239,26 +239,26 @@
[table markup re-rendered; row text unchanged: Linux: gcc, clang; macOS: Apple Xcode (using clang); AIX: IBM XL C/C++; Windows: Microsoft Visual Studio]
@@ -266,22 +266,22 @@
[table markup re-rendered; row text unchanged: Linux: gcc 10.2.0; macOS: Apple Xcode 10.1 (using clang 10.0.0); Windows: Microsoft Visual Studio 2022 update 17.1.0]
@@ -295,12 +295,16 @@
To use clang instead of gcc on Linux, use --with-toolchain-type=clang.
Apple Xcode
The oldest supported version of Xcode is 8.
-You will need the Xcode command lines developers tools to be able to build the JDK. (Actually, only the command lines tools are needed, not the IDE.) The simplest way to install these is to run:
+You will need the Xcode command line developer tools to be able to build the JDK. (Actually, only the command line tools are needed, not the IDE.) The simplest way to install these is to run:
xcode-select --install
-It is advisable to keep an older version of Xcode for building the JDK when updating Xcode. This blog page has good suggestions on managing multiple Xcode versions. To use a specific version of Xcode, use xcode-select -s before running configure, or use --with-toolchain-path to point to the version of Xcode to use, e.g. configure --with-toolchain-path=/Applications/Xcode8.app/Contents/Developer/usr/bin
+When updating Xcode, it is advisable to keep an older version for building the JDK. To use a specific version of Xcode you have multiple options:
+
+- Use xcode-select -s before running configure, e.g. xcode-select -s /Applications/Xcode13.1.app. The drawback is that the setting is system wide and you may have to revert it after an OpenJDK build.
+- Use the configure option --with-xcode-path, e.g. configure --with-xcode-path=/Applications/Xcode13.1.app. This allows using a specific Xcode version for an OpenJDK build, independently of the active Xcode version selected by xcode-select.
+
If you have recently (inadvertently) updated your OS and/or Xcode version, and the JDK can no longer be built, please see the section on Problems with the Build Environment, and Getting Help to find out if there are any recent, non-merged patches available for this update.
Microsoft Visual Studio
-For aarch64 machines running Windows the minimum accepted version is Visual Studio 2019 (16.8 or higher). For all other platforms the minimum accepted version of Visual Studio is 2017. Older versions will not be accepted by configure and will not work. For all platforms the maximum accepted version of Visual Studio is 2019.
+The minimum accepted version of Visual Studio is 2017. Older versions will not be accepted by configure and will not work. The maximum accepted version of Visual Studio is 2019.
If you have multiple versions of Visual Studio installed, configure will by default pick the latest. You can request a specific version to be used by setting --with-toolchain-version, e.g. --with-toolchain-version=2017.
If you have Visual Studio installed but configure fails to detect it, it may be because of spaces in the path.
IBM XL C/C++
@@ -509,7 +513,7 @@
Running Tests
Most of the JDK tests are using the JTReg test framework. Make sure that your configuration knows where to find your installation of JTReg. If this is not picked up automatically, use the --with-jtreg=<path to jtreg home> option to point to the JTReg framework. Note that this option should point to the JTReg home, i.e. the top directory, containing lib/jtreg.jar etc.
-The Adoption Group provides recent builds of jtreg here. Download the latest .tar.gz file, unpack it, and point --with-jtreg to the jtreg directory that you just unpacked.
+The Adoption Group provides recent builds of jtreg here. Download the latest .tar.gz file, unpack it, and point --with-jtreg to the jtreg directory that you just unpacked.
Building of the Hotspot Gtest suite requires the source code of the Google Test framework. The top directory, which contains both googletest and googlemock directories, should be specified via --with-gtest. The supported version of Google Test is 1.8.1, whose source code can be obtained:
- by downloading and unpacking the source bundle from here
diff -Nru openjdk-17-17.0.6+10/doc/building.md openjdk-17-17.0.7+7/doc/building.md
--- openjdk-17-17.0.6+10/doc/building.md 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/doc/building.md 2023-04-12 20:11:58.000000000 +0000
@@ -305,12 +305,12 @@
system should be independent factors, but in practice there's more or less a
one-to-one correlation between target operating system and toolchain.
- Operating system Supported toolchain
- ------------------ -------------------------
- Linux gcc, clang
- macOS Apple Xcode (using clang)
- AIX IBM XL C/C++
- Windows Microsoft Visual Studio
+| Operating system | Supported toolchain |
+| ------------------ | ------------------------- |
+| Linux | gcc, clang |
+| macOS | Apple Xcode (using clang) |
+| AIX | IBM XL C/C++ |
+| Windows | Microsoft Visual Studio |
Please see the individual sections on the toolchains for version
recommendations. As a reference, these versions of the toolchains are used, at
@@ -319,11 +319,11 @@
you stay to this list, the more likely you are to compile successfully without
issues.
- Operating system Toolchain version
- ------------------ -------------------------------------------------------
- Linux gcc 10.2.0
- macOS Apple Xcode 10.1 (using clang 10.0.0)
- Windows Microsoft Visual Studio 2022 update 17.1.0
+| Operating system | Toolchain version |
+| ------------------ | ------------------------------------------ |
+| Linux | gcc 10.2.0 |
+| macOS | Apple Xcode 10.1 (using clang 10.0.0) |
+| Windows | Microsoft Visual Studio 2022 update 17.1.0 |
All compilers are expected to be able to compile to the C99 language standard,
as some C99 features are used in the source code. Microsoft Visual Studio
@@ -351,20 +351,20 @@
The oldest supported version of Xcode is 8.
-You will need the Xcode command lines developers tools to be able to build
-the JDK. (Actually, *only* the command lines tools are needed, not the IDE.)
+You will need the Xcode command line developer tools to be able to build
+the JDK. (Actually, *only* the command line tools are needed, not the IDE.)
The simplest way to install these is to run:
```
xcode-select --install
```
-It is advisable to keep an older version of Xcode for building the JDK when
-updating Xcode. This [blog page](
-http://iosdevelopertips.com/xcode/install-multiple-versions-of-xcode.html) has
-good suggestions on managing multiple Xcode versions. To use a specific version
-of Xcode, use `xcode-select -s` before running `configure`, or use
-`--with-toolchain-path` to point to the version of Xcode to use, e.g.
-`configure --with-toolchain-path=/Applications/Xcode8.app/Contents/Developer/usr/bin`
+When updating Xcode, it is advisable to keep an older version for building the JDK.
+To use a specific version of Xcode you have multiple options:
+
+ * Use `xcode-select -s` before running `configure`, e.g. `xcode-select -s /Applications/Xcode13.1.app`. The drawback is that the setting
+ is system wide and you may have to revert it after an OpenJDK build.
+ * Use the configure option `--with-xcode-path`, e.g.
+   `configure --with-xcode-path=/Applications/Xcode13.1.app`. This allows
+   using a specific Xcode version for an OpenJDK build, independently of
+   the active Xcode version selected by `xcode-select`.
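A concrete shell session exercising both options described above (the Xcode 13.1 path is just an example value):

```
# Option 1: switch the system-wide active Xcode, build, then revert.
sudo xcode-select -s /Applications/Xcode13.1.app
bash configure
sudo xcode-select -s /Applications/Xcode.app   # restore afterwards

# Option 2: pin an Xcode for this build only, leaving the
# system-wide selection untouched.
bash configure --with-xcode-path=/Applications/Xcode13.1.app
```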
If you have recently (inadvertently) updated your OS and/or Xcode version, and
the JDK can no longer be built, please see the section on [Problems with the
@@ -848,7 +848,7 @@
The [Adoption Group](https://wiki.openjdk.java.net/display/Adoption) provides
recent builds of jtreg [here](
-https://ci.adoptopenjdk.net/view/Dependencies/job/dependency_pipeline/lastSuccessfulBuild/artifact/jtreg/).
+https://ci.adoptium.net/view/Dependencies/job/dependency_pipeline/lastSuccessfulBuild/artifact/jtreg/).
Download the latest `.tar.gz` file, unpack it, and point `--with-jtreg` to the
`jtreg` directory that you just unpacked.
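A minimal sketch of that workflow, with a placeholder tarball name since the exact artifact name depends on the CI build:

```
# Unpack a jtreg tarball fetched from the Adoption Group CI
# (jtreg.tar.gz is a placeholder name).
tar -xzf jtreg.tar.gz

# Point configure at the unpacked jtreg home, i.e. the directory
# containing lib/jtreg.jar.
bash configure --with-jtreg=$PWD/jtreg
```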
diff -Nru openjdk-17-17.0.6+10/make/ZipSource.gmk openjdk-17-17.0.7+7/make/ZipSource.gmk
--- openjdk-17-17.0.6+10/make/ZipSource.gmk 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/ZipSource.gmk 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2014, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -31,6 +31,7 @@
include Modules.gmk
SRC_ZIP_WORK_DIR := $(SUPPORT_OUTPUTDIR)/src
+$(if $(filter $(TOPDIR)/%, $(SUPPORT_OUTPUTDIR)), $(eval SRC_ZIP_BASE := $(TOPDIR)), $(eval SRC_ZIP_BASE := $(SUPPORT_OUTPUTDIR)))
# Hook to include the corresponding custom file, if present.
$(eval $(call IncludeCustomExtension, ZipSource.gmk))
@@ -45,10 +46,10 @@
# again to create src.zip.
$(foreach m, $(ALL_MODULES), \
$(foreach d, $(call FindModuleSrcDirs, $m), \
- $(eval $d_TARGET := $(SRC_ZIP_WORK_DIR)/$(patsubst $(TOPDIR)/%,%,$d)/$m) \
+ $(eval $d_TARGET := $(SRC_ZIP_WORK_DIR)/$(patsubst $(TOPDIR)/%,%,$(patsubst $(SUPPORT_OUTPUTDIR)/%,%,$d))/$m) \
$(if $(SRC_GENERATED), , \
$(eval $$($d_TARGET): $d ; \
- $$(if $(filter $(TOPDIR)/%, $d), $$(link-file-relative), $$(link-file-absolute)) \
+ $$(if $(filter $(SRC_ZIP_BASE)/%, $d), $$(link-file-relative), $$(link-file-absolute)) \
) \
) \
$(eval SRC_ZIP_SRCS += $$($d_TARGET)) \
diff -Nru openjdk-17-17.0.6+10/make/autoconf/basic.m4 openjdk-17-17.0.7+7/make/autoconf/basic.m4
--- openjdk-17-17.0.6+10/make/autoconf/basic.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/basic.m4 2023-04-12 20:11:58.000000000 +0000
@@ -212,6 +212,18 @@
[UTIL_PREPEND_TO_PATH([TOOLCHAIN_PATH],$with_toolchain_path)]
)
+ AC_ARG_WITH([xcode-path], [AS_HELP_STRING([--with-xcode-path],
+ [set up toolchain on Mac OS using a path to an Xcode installation])])
+
+ if test "x$with_xcode_path" != x; then
+ if test "x$OPENJDK_BUILD_OS" = "xmacosx"; then
+ UTIL_PREPEND_TO_PATH([TOOLCHAIN_PATH],
+ $with_xcode_path/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:$with_xcode_path/Contents/Developer/usr/bin)
+ else
+ AC_MSG_WARN([Option --with-xcode-path is only valid on Mac OS, ignoring.])
+ fi
+ fi
+
AC_ARG_WITH([extra-path], [AS_HELP_STRING([--with-extra-path],
[prepend these directories to the default path])],
[UTIL_PREPEND_TO_PATH([EXTRA_PATH],$with_extra_path)]
@@ -222,7 +234,7 @@
# If not, detect if Xcode is installed by running xcodebuild -version
# if no Xcode installed, xcodebuild exits with 1
# if Xcode is installed, even if xcode-select is misconfigured, then it exits with 0
- if test "x$DEVKIT_ROOT" != x || /usr/bin/xcodebuild -version >/dev/null 2>&1; then
+ if test "x$DEVKIT_ROOT" != x || test "x$TOOLCHAIN_PATH" != x || /usr/bin/xcodebuild -version >/dev/null 2>&1; then
# We need to use xcodebuild in the toolchain dir provided by the user
UTIL_LOOKUP_PROGS(XCODEBUILD, xcodebuild, $TOOLCHAIN_PATH)
if test x$XCODEBUILD = x; then
diff -Nru openjdk-17-17.0.6+10/make/autoconf/basic_tools.m4 openjdk-17-17.0.7+7/make/autoconf/basic_tools.m4
--- openjdk-17-17.0.6+10/make/autoconf/basic_tools.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/basic_tools.m4 2023-04-12 20:11:58.000000000 +0000
@@ -161,7 +161,7 @@
[
# Check if make supports the output sync option and if so, setup using it.
UTIL_ARG_WITH(NAME: output-sync, TYPE: literal,
- VALID_VALUES: [none recurse line target], DEFAULT: recurse,
+ VALID_VALUES: [none recurse line target], DEFAULT: none,
OPTIONAL: true, ENABLED_DEFAULT: true,
ENABLED_RESULT: OUTPUT_SYNC_SUPPORTED,
CHECKING_MSG: [for make --output-sync value],
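The changed DEFAULT only affects what happens when the option is not given on the command line; either behaviour can still be requested explicitly. A sketch, assuming UTIL_ARG_WITH derives the usual --with-output-sync option from the NAME above:

```
# Restore the previous default: buffer output per recursive make.
bash configure --with-output-sync=recurse

# Match the new default explicitly: no output buffering.
bash configure --with-output-sync=none
```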
diff -Nru openjdk-17-17.0.6+10/make/autoconf/build-aux/config.guess openjdk-17-17.0.7+7/make/autoconf/build-aux/config.guess
--- openjdk-17-17.0.6+10/make/autoconf/build-aux/config.guess 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/build-aux/config.guess 2023-04-12 20:11:58.000000000 +0000
@@ -29,7 +29,40 @@
# and fix the broken property, if needed.
DIR=`dirname $0`
-OUT=`. $DIR/autoconf-config.guess`
+OUT=`. $DIR/autoconf-config.guess 2> /dev/null`
+
+# Handle some cases that autoconf-config.guess is not capable of
+if [ "x$OUT" = x ]; then
+ if [ `uname -s` = Linux ]; then
+ # Test and fix little endian MIPS.
+ if [ `uname -m` = mipsel ]; then
+ OUT=mipsel-unknown-linux-gnu
+ elif [ `uname -m` = mips64el ]; then
+ OUT=mips64el-unknown-linux-gnu
+ # Test and fix little endian PowerPC64.
+ elif [ `uname -m` = ppc64le ]; then
+ OUT=powerpc64le-unknown-linux-gnu
+ # Test and fix LoongArch64.
+ elif [ `uname -m` = loongarch64 ]; then
+ OUT=loongarch64-unknown-linux-gnu
+ # Test and fix RISC-V.
+ elif [ `uname -m` = riscv64 ]; then
+ OUT=riscv64-unknown-linux-gnu
+ fi
+ # Test and fix cygwin machine arch .x86_64
+ elif [[ `uname -s` = CYGWIN* ]]; then
+ if [ `uname -m` = ".x86_64" ]; then
+ OUT=x86_64-unknown-cygwin
+ fi
+ fi
+
+ if [ "x$OUT" = x ]; then
+ # Run autoconf-config.guess again to get the error message.
+ . $DIR/autoconf-config.guess > /dev/null
+ else
+ printf "guessed by custom config.guess... " >&2
+ fi
+fi
# Detect C library.
# Use '-gnu' suffix on systems that use glibc.
@@ -81,36 +114,6 @@
OUT=powerpc$KERNEL_BITMODE`echo $OUT | sed -e 's/[^-]*//'`
fi
-# Test and fix little endian PowerPC64.
-# TODO: should be handled by autoconf-config.guess.
-if [ "x$OUT" = x ]; then
- if [ `uname -m` = ppc64le ]; then
- if [ `uname -s` = Linux ]; then
- OUT=powerpc64le-unknown-linux-gnu
- fi
- fi
-fi
-
-# Test and fix little endian MIPS.
-if [ "x$OUT" = x ]; then
- if [ `uname -s` = Linux ]; then
- if [ `uname -m` = mipsel ]; then
- OUT=mipsel-unknown-linux-gnu
- elif [ `uname -m` = mips64el ]; then
- OUT=mips64el-unknown-linux-gnu
- fi
- fi
-fi
-
-# Test and fix LoongArch64.
-if [ "x$OUT" = x ]; then
- if [ `uname -s` = Linux ]; then
- if [ `uname -m` = loongarch64 ]; then
- OUT=loongarch64-unknown-linux-gnu
- fi
- fi
-fi
-
# Test and fix cpu on macos-aarch64, uname -p reports arm, buildsys expects aarch64
echo $OUT | grep arm-apple-darwin > /dev/null 2> /dev/null
if test $? != 0; then
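Since config.guess just prints a target triplet, the consolidated fallback above is easy to exercise by hand; on a host that the stock autoconf-config.guess cannot identify, a session would look roughly like this (riscv64 shown as an illustrative case, with the progress message going to stderr):

```
$ uname -m
riscv64
$ make/autoconf/build-aux/config.guess
guessed by custom config.guess... riscv64-unknown-linux-gnu
```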
diff -Nru openjdk-17-17.0.6+10/make/autoconf/flags-cflags.m4 openjdk-17-17.0.7+7/make/autoconf/flags-cflags.m4
--- openjdk-17-17.0.6+10/make/autoconf/flags-cflags.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/flags-cflags.m4 2023-04-12 20:11:58.000000000 +0000
@@ -641,7 +641,7 @@
STATIC_LIBS_CFLAGS="-DSTATIC_BUILD=1"
if test "x$TOOLCHAIN_TYPE" = xgcc || test "x$TOOLCHAIN_TYPE" = xclang; then
STATIC_LIBS_CFLAGS="$STATIC_LIBS_CFLAGS -ffunction-sections -fdata-sections \
- -DJNIEXPORT='__attribute__((visibility(\"hidden\")))'"
+ -DJNIEXPORT='__attribute__((visibility(\"default\")))'"
else
STATIC_LIBS_CFLAGS="$STATIC_LIBS_CFLAGS -DJNIEXPORT="
fi
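The visible effect of flipping JNIEXPORT from hidden to default visibility is in the symbol table of objects compiled for static libraries: a JNI entry point marked hidden cannot be resolved when the statically linked image later looks it up. One way to inspect this, with an illustrative object path and symbol name:

```
# The Vis column should now read DEFAULT rather than HIDDEN for
# JNI entry points in objects from a static-libs build.
readelf -s support/native/java.base/libnet/net_util.o | grep ' Java_'
```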
diff -Nru openjdk-17-17.0.6+10/make/autoconf/flags.m4 openjdk-17-17.0.7+7/make/autoconf/flags.m4
--- openjdk-17-17.0.6+10/make/autoconf/flags.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/flags.m4 2023-04-12 20:11:58.000000000 +0000
@@ -489,14 +489,14 @@
UTIL_DEFUN_NAMED([FLAGS_COMPILER_CHECK_ARGUMENTS],
[*ARGUMENT IF_TRUE IF_FALSE PREFIX], [$@],
[
- FLAGS_C_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [ARG_ARGUMENT],
+ FLAGS_C_COMPILER_CHECK_ARGUMENTS(ARGUMENT: ARG_ARGUMENT,
IF_TRUE: [C_COMP_SUPPORTS="yes"],
IF_FALSE: [C_COMP_SUPPORTS="no"],
- PREFIX: [ARG_PREFIX])
- FLAGS_CXX_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [ARG_ARGUMENT],
+ PREFIX: ARG_PREFIX)
+ FLAGS_CXX_COMPILER_CHECK_ARGUMENTS(ARGUMENT: ARG_ARGUMENT,
IF_TRUE: [CXX_COMP_SUPPORTS="yes"],
IF_FALSE: [CXX_COMP_SUPPORTS="no"],
- PREFIX: [ARG_PREFIX])
+ PREFIX: ARG_PREFIX)
AC_MSG_CHECKING([if both ARG_PREFIX[CC] and ARG_PREFIX[CXX] support "ARG_ARGUMENT"])
supports=no
diff -Nru openjdk-17-17.0.6+10/make/autoconf/jdk-options.m4 openjdk-17-17.0.7+7/make/autoconf/jdk-options.m4
--- openjdk-17-17.0.6+10/make/autoconf/jdk-options.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/jdk-options.m4 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -748,7 +748,7 @@
$RM "$CODESIGN_TESTFILE"
$TOUCH "$CODESIGN_TESTFILE"
CODESIGN_SUCCESS=false
- $CODESIGN $PARAMS "$CODESIGN_TESTFILE" 2>&AS_MESSAGE_LOG_FD \
+ eval \"$CODESIGN\" $PARAMS \"$CODESIGN_TESTFILE\" 2>&AS_MESSAGE_LOG_FD \
>&AS_MESSAGE_LOG_FD && CODESIGN_SUCCESS=true
$RM "$CODESIGN_TESTFILE"
AC_MSG_CHECKING([$MESSAGE])
@@ -761,7 +761,7 @@
AC_DEFUN([JDKOPT_CHECK_CODESIGN_HARDENED],
[
- JDKOPT_CHECK_CODESIGN_PARAMS([-s "$MACOSX_CODESIGN_IDENTITY" --option runtime],
+ JDKOPT_CHECK_CODESIGN_PARAMS([-s \"$MACOSX_CODESIGN_IDENTITY\" --option runtime],
[if codesign with hardened runtime is possible])
])
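The switch to eval matters when MACOSX_CODESIGN_IDENTITY contains spaces, which signing identities usually do. A standalone shell illustration of the same pattern (identity string is hypothetical):

```
CODESIGN=codesign
CODESIGN_TESTFILE="codesign-testfile"
PARAMS='-s "Example Corp: Developer ID Application" --option runtime'

# Plain expansion of $PARAMS would split the identity at every space.
# eval re-parses the command line, so the embedded double quotes
# group the identity back into a single argument.
eval \"$CODESIGN\" $PARAMS \"$CODESIGN_TESTFILE\"
```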
diff -Nru openjdk-17-17.0.6+10/make/autoconf/lib-x11.m4 openjdk-17-17.0.7+7/make/autoconf/lib-x11.m4
--- openjdk-17-17.0.6+10/make/autoconf/lib-x11.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/lib-x11.m4 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -96,24 +96,29 @@
OLD_CFLAGS="$CFLAGS"
CFLAGS="$CFLAGS $SYSROOT_CFLAGS $X_CFLAGS"
- HEADERS_TO_CHECK="X11/extensions/shape.h X11/extensions/Xrender.h X11/extensions/XTest.h X11/Intrinsic.h"
- # There is no Xrandr extension on AIX
if test "x$OPENJDK_TARGET_OS" = xaix; then
+ # There is no Xrandr extension on AIX. Code is duplicated to avoid autoconf
+ # 2.71+ warning "AC_CHECK_HEADERS: you should use literals"
X_CFLAGS="$X_CFLAGS -DNO_XRANDR"
+ AC_CHECK_HEADERS([X11/extensions/shape.h X11/extensions/Xrender.h X11/extensions/XTest.h X11/Intrinsic.h],
+ [X11_HEADERS_OK=yes],
+ [X11_HEADERS_OK=no; break],
+ [
+ # include <X11/Xlib.h>
+ # include <X11/Xutil.h>
+ ]
+ )
else
- HEADERS_TO_CHECK="$HEADERS_TO_CHECK X11/extensions/Xrandr.h"
+ AC_CHECK_HEADERS([X11/extensions/shape.h X11/extensions/Xrender.h X11/extensions/XTest.h X11/Intrinsic.h X11/extensions/Xrandr.h],
+ [X11_HEADERS_OK=yes],
+ [X11_HEADERS_OK=no; break],
+ [
+ # include <X11/Xlib.h>
+ # include <X11/Xutil.h>
+ ]
+ )
fi
- # Need to include Xlib.h and Xutil.h to avoid "present but cannot be compiled" warnings on Solaris 10
- AC_CHECK_HEADERS([$HEADERS_TO_CHECK],
- [X11_HEADERS_OK=yes],
- [X11_HEADERS_OK=no; break],
- [
- # include <X11/Xlib.h>
- # include <X11/Xutil.h>
- ]
- )
-
if test "x$X11_HEADERS_OK" = xno; then
HELP_MSG_MISSING_DEPENDENCY([x11])
AC_MSG_ERROR([Could not find all X11 headers (shape.h Xrender.h Xrandr.h XTest.h Intrinsic.h). $HELP_MSG])
diff -Nru openjdk-17-17.0.6+10/make/autoconf/util.m4 openjdk-17-17.0.7+7/make/autoconf/util.m4
--- openjdk-17-17.0.6+10/make/autoconf/util.m4 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/autoconf/util.m4 2023-04-12 20:11:58.000000000 +0000
@@ -52,7 +52,7 @@
AC_DEFUN([UTIL_DEFUN_NAMED],
[
AC_DEFUN($1, [
- m4_foreach(arg, m4_split(m4_normalize($2)), [
+ m4_foreach([arg], m4_split(m4_normalize($2)), [
m4_if(m4_bregexp(arg, [^\*]), -1,
[
m4_set_add(legal_named_args, arg)
@@ -64,13 +64,18 @@
)
])
- m4_foreach([arg], [$3], [
- m4_if(m4_bregexp(arg, [: ]), -1, m4_define([arg], m4_bpatsubst(arg, [:], [: ])))
- m4_define(arg_name, m4_substr(arg, 0, m4_bregexp(arg, [: ])))
+ # Delicate quoting and unquoting sequence to ensure the actual value is passed along unchanged
+ # For details on how this works, see https://git.openjdk.org/jdk/pull/11458#discussion_r1038173051
+ # WARNING: Proceed at the risk of your own sanity, getting this to work has made me completely
+ # incapable of feeling love or any other positive emotion
+ # ~Julian
+ m4_foreach([arg], m4_dquote(m4_dquote_elt($3)), [
+ m4_if(m4_index(arg, [: ]), -1, [m4_define([arg], m4_dquote(m4_bpatsubst(m4_dquote(arg), [:], [: ])))])
+ m4_define(arg_name, m4_substr(arg, 0, m4_index(arg, [: ])))
m4_set_contains(legal_named_args, arg_name, [],[AC_MSG_ERROR([Internal error: m4_if(arg_name, , arg, arg_name) is not a valid named argument to [$1]. Valid arguments are 'm4_set_contents(defined_args, [ ]) m4_set_contents(legal_named_args, [ ])'.])])
m4_set_remove(required_named_args, arg_name)
m4_set_remove(legal_named_args, arg_name)
- m4_pushdef([ARG_][]arg_name, m4_bpatsubst(m4_substr(arg, m4_incr(m4_incr(m4_bregexp(arg, [: ])))), [^\s*], []))
+ m4_pushdef([ARG_][]arg_name, m4_bpatsubst(m4_bpatsubst(m4_dquote(m4_dquote(arg)), arg_name[: ]), [^\s*]))
m4_set_add(defined_args, arg_name)
m4_undefine([arg_name])
])
@@ -373,18 +378,18 @@
m4_define(ARG_GIVEN, m4_translit(ARG_NAME, [a-z-], [A-Z_])[_GIVEN])
# If DESC is not specified, set it to a generic description.
- m4_define([ARG_DESC], m4_if(ARG_DESC, , [Enable the ARG_NAME feature], m4_normalize(ARG_DESC)))
+ m4_define([ARG_DESC], m4_if(m4_quote(ARG_DESC), , [[Enable the ARG_NAME feature]], [m4_normalize(ARG_DESC)]))
# If CHECKING_MSG is not specified, set it to a generic description.
- m4_define([ARG_CHECKING_MSG], m4_if(ARG_CHECKING_MSG, , [for --enable-ARG_NAME], m4_normalize(ARG_CHECKING_MSG)))
+ m4_define([ARG_CHECKING_MSG], m4_if(m4_quote(ARG_CHECKING_MSG), , [[for --enable-ARG_NAME]], [m4_normalize(ARG_CHECKING_MSG)]))
# If the code blocks are not given, set them to the empty statements to avoid
# tripping up bash.
- m4_define([ARG_CHECK_AVAILABLE], m4_if(ARG_CHECK_AVAILABLE, , :, ARG_CHECK_AVAILABLE))
- m4_define([ARG_IF_GIVEN], m4_if(ARG_IF_GIVEN, , :, ARG_IF_GIVEN))
- m4_define([ARG_IF_NOT_GIVEN], m4_if(ARG_IF_NOT_GIVEN, , :, ARG_IF_NOT_GIVEN))
- m4_define([ARG_IF_ENABLED], m4_if(ARG_IF_ENABLED, , :, ARG_IF_ENABLED))
- m4_define([ARG_IF_DISABLED], m4_if(ARG_IF_DISABLED, , :, ARG_IF_DISABLED))
+ m4_if(ARG_CHECK_AVAILABLE, , [m4_define([ARG_CHECK_AVAILABLE], [:])])
+ m4_if(ARG_IF_GIVEN, , [m4_define([ARG_IF_GIVEN], [:])])
+ m4_if(ARG_IF_NOT_GIVEN, , [m4_define([ARG_IF_NOT_GIVEN], [:])])
+ m4_if(ARG_IF_ENABLED, , [m4_define([ARG_IF_ENABLED], [:])])
+ m4_if(ARG_IF_DISABLED, , [m4_define([ARG_IF_DISABLED], [:])])
##########################
# Part 2: Set up autoconf shell code
@@ -647,21 +652,21 @@
m4_define(ARG_GIVEN, m4_translit(ARG_NAME, [a-z-], [A-Z_])[_GIVEN])
# If DESC is not specified, set it to a generic description.
- m4_define([ARG_DESC], m4_if(ARG_DESC, , [Give a value for the ARG_NAME feature], m4_normalize(ARG_DESC)))
+ m4_define([ARG_DESC], m4_if(m4_quote(ARG_DESC), , [[Give a value for the ARG_NAME feature]], [m4_normalize(ARG_DESC)]))
# If CHECKING_MSG is not specified, set it to a generic description.
- m4_define([ARG_CHECKING_MSG], m4_if(ARG_CHECKING_MSG, , [for --with-ARG_NAME], m4_normalize(ARG_CHECKING_MSG)))
+ m4_define([ARG_CHECKING_MSG], m4_if(m4_quote(ARG_CHECKING_MSG), , [[for --with-ARG_NAME]], [m4_normalize(ARG_CHECKING_MSG)]))
m4_define([ARG_HAS_AUTO_BLOCK], m4_if(ARG_IF_AUTO, , false, true))
# If the code blocks are not given, set them to the empty statements to avoid
# tripping up bash.
- m4_define([ARG_CHECK_AVAILABLE], m4_if(ARG_CHECK_AVAILABLE, , :, ARG_CHECK_AVAILABLE))
- m4_define([ARG_CHECK_VALUE], m4_if(ARG_CHECK_VALUE, , :, ARG_CHECK_VALUE))
- m4_define([ARG_CHECK_FOR_FILES], m4_if(ARG_CHECK_FOR_FILES, , :, ARG_CHECK_FOR_FILES))
- m4_define([ARG_IF_AUTO], m4_if(ARG_IF_AUTO, , :, ARG_IF_AUTO))
- m4_define([ARG_IF_GIVEN], m4_if(ARG_IF_GIVEN, , :, ARG_IF_GIVEN))
- m4_define([ARG_IF_NOT_GIVEN], m4_if(ARG_IF_NOT_GIVEN, , :, ARG_IF_NOT_GIVEN))
+ m4_if(ARG_CHECK_AVAILABLE, , [m4_define([ARG_CHECK_AVAILABLE], [:])])
+ m4_if(ARG_CHECK_VALUE, , [m4_define([ARG_CHECK_VALUE], [:])])
+ m4_if(ARG_CHECK_FOR_FILES, , [m4_define([ARG_CHECK_FOR_FILES], [:])])
+ m4_if(ARG_IF_AUTO, , [m4_define([ARG_IF_AUTO], [:])])
+ m4_if(ARG_IF_GIVEN, , [m4_define([ARG_IF_GIVEN], [:])])
+ m4_if(ARG_IF_NOT_GIVEN, , [m4_define([ARG_IF_NOT_GIVEN], [:])])
##########################
# Part 2: Set up autoconf shell code
@@ -699,7 +704,6 @@
ARG_CHECK_AVAILABLE
# Check if the option should be turned on
- echo check msg:ARG_CHECKING_MSG:
AC_MSG_CHECKING(ARG_CHECKING_MSG)
if test x$AVAILABLE = xfalse; then
diff -Nru openjdk-17-17.0.6+10/make/common/MakeBase.gmk openjdk-17-17.0.7+7/make/common/MakeBase.gmk
--- openjdk-17-17.0.6+10/make/common/MakeBase.gmk 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/common/MakeBase.gmk 2023-04-12 20:11:58.000000000 +0000
@@ -307,17 +307,36 @@
# There are two versions, either creating a relative or an absolute link. Be
# careful when using this on Windows since the symlink created is only valid in
# the unix emulation environment.
-define link-file-relative
+# In msys2 we use mklink /J because its ln would perform a deep copy of the target.
+# This hurts performance and can lead to issues with long paths. With mklink /J
+# relative linking does not work, so we handle the link as an absolute path.
+ifeq ($(OPENJDK_BUILD_OS_ENV), windows.msys2)
+ define link-file-relative
+ $(call MakeTargetDir)
+ $(RM) '$(call DecodeSpace, $@)'
+ cmd //c "mklink /J $(call FixPath, $(call DecodeSpace, $@)) $(call FixPath, $(call DecodeSpace, $<))"
+ endef
+else
+ define link-file-relative
$(call MakeTargetDir)
$(RM) '$(call DecodeSpace, $@)'
$(LN) -s '$(call DecodeSpace, $(call RelativePath, $<, $(@D)))' '$(call DecodeSpace, $@)'
-endef
+ endef
+endif
-define link-file-absolute
+ifeq ($(OPENJDK_BUILD_OS_ENV), windows.msys2)
+ define link-file-absolute
+ $(call MakeTargetDir)
+ $(RM) '$(call DecodeSpace, $@)'
+ cmd //c "mklink /J $(call FixPath, $(call DecodeSpace, $@)) $(call FixPath, $(call DecodeSpace, $<))"
+ endef
+else
+ define link-file-absolute
$(call MakeTargetDir)
$(RM) '$(call DecodeSpace, $@)'
$(LN) -s '$(call DecodeSpace, $<)' '$(call DecodeSpace, $@)'
-endef
+ endef
+endif
################################################################################
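Stripped of the make plumbing, the msys2 branch boils down to a single cmd invocation; mklink /J creates an NTFS directory junction instead of letting msys2's ln deep-copy the target (paths are illustrative):

```
# Run from an msys2 shell; both the link and its target must be
# directories, which holds for the source dirs linked here.
cmd //c "mklink /J C:\\build\\support\\src C:\\jdk17u\\src"
```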
diff -Nru openjdk-17-17.0.6+10/make/conf/github-actions.conf openjdk-17-17.0.7+7/make/conf/github-actions.conf
--- openjdk-17-17.0.6+10/make/conf/github-actions.conf 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/conf/github-actions.conf 2023-04-12 20:11:58.000000000 +0000
@@ -26,16 +26,16 @@
# Versions and download locations for dependencies used by GitHub Actions (GHA)
GTEST_VERSION=1.8.1
-JTREG_VERSION=6.1+2
+JTREG_VERSION=6.1+3
LINUX_X64_BOOT_JDK_EXT=tar.gz
-LINUX_X64_BOOT_JDK_URL=https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.2%2B8/OpenJDK17U-jdk_x64_linux_hotspot_17.0.2_8.tar.gz
-LINUX_X64_BOOT_JDK_SHA256=288f34e3ba8a4838605636485d0365ce23e57d5f2f68997ac4c2e4c01967cd48
+LINUX_X64_BOOT_JDK_URL=https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.6%2B10/OpenJDK17U-jdk_x64_linux_hotspot_17.0.6_10.tar.gz
+LINUX_X64_BOOT_JDK_SHA256=a0b1b9dd809d51a438f5fa08918f9aca7b2135721097f0858cf29f77a35d4289
WINDOWS_X64_BOOT_JDK_EXT=zip
-WINDOWS_X64_BOOT_JDK_URL=https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.2%2B8/OpenJDK17U-jdk_x64_windows_hotspot_17.0.2_8.zip
-WINDOWS_X64_BOOT_JDK_SHA256=d083479ca927dce2f586f779373d895e8bf668c632505740279390384edf03fa
+WINDOWS_X64_BOOT_JDK_URL=https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.6%2B10/OpenJDK17U-jdk_x64_windows_hotspot_17.0.6_10.zip
+WINDOWS_X64_BOOT_JDK_SHA256=d544c4f00d414a1484c0a5c1758544f30f308c4df33f9a28bd4a404215d0d444
MACOS_X64_BOOT_JDK_EXT=tar.gz
-MACOS_X64_BOOT_JDK_URL=https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.2%2B8/OpenJDK17U-jdk_x64_mac_hotspot_17.0.2_8.tar.gz
-MACOS_X64_BOOT_JDK_SHA256=3630e21a571b7180876bf08f85d0aac0bdbb3267b2ae9bd242f4933b21f9be32
+MACOS_X64_BOOT_JDK_URL=https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.6%2B10/OpenJDK17U-jdk_x64_mac_hotspot_17.0.6_10.tar.gz
+MACOS_X64_BOOT_JDK_SHA256=faa2927584cf2bd0a35d2ac727b9f22725e23b2b24abfb3b2ac7140f4d65fbb4
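Each URL is paired with a SHA256 so the workflow can verify the boot JDK before trusting it; the equivalent manual check for the Linux artifact listed above is:

```
curl -L -o bootjdk.tar.gz \
  "https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.6%2B10/OpenJDK17U-jdk_x64_linux_hotspot_17.0.6_10.tar.gz"
echo "a0b1b9dd809d51a438f5fa08918f9aca7b2135721097f0858cf29f77a35d4289  bootjdk.tar.gz" \
  | sha256sum -c -
```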
diff -Nru openjdk-17-17.0.6+10/make/conf/jib-profiles.js openjdk-17-17.0.7+7/make/conf/jib-profiles.js
--- openjdk-17-17.0.6+10/make/conf/jib-profiles.js 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/conf/jib-profiles.js 2023-04-12 20:11:58.000000000 +0000
@@ -67,6 +67,7 @@
* input.build_osenv
* input.build_osenv_cpu
* input.build_osenv_platform
+ * input.build_osenv_version
*
* For more complex nested attributes, there is a method "get":
*
@@ -1098,9 +1099,23 @@
environment_path: common.boot_jdk_home + "/bin"
}
- var makeBinDir = (input.build_os == "windows"
- ? input.get("gnumake", "install_path") + "/cygwin/bin"
- : input.get("gnumake", "install_path") + "/bin");
+ var makeRevision = "4.0+1.0";
+ var makeBinSubDir = "/bin";
+ var makeModule = "gnumake-" + input.build_platform;
+ if (input.build_os == "windows") {
+ makeModule = "gnumake-" + input.build_osenv_platform;
+ if (input.build_osenv == "cygwin") {
+ var versionArray = input.build_osenv_version.split(/\./);
+ var majorVer = parseInt(versionArray[0]);
+ var minorVer = parseInt(versionArray[1]);
+ if (majorVer > 3 || (majorVer == 3 && minorVer >= 3)) {
+ makeRevision = "4.3+1.0";
+ } else {
+ makeBinSubDir = "/cygwin/bin";
+ }
+ }
+ }
+ var makeBinDir = input.get("gnumake", "install_path") + makeBinSubDir;
var dependencies = {
boot_jdk: boot_jdk,
@@ -1172,18 +1187,12 @@
gnumake: {
organization: common.organization,
ext: "tar.gz",
- revision: "4.0+1.0",
-
- module: (input.build_os == "windows"
- ? "gnumake-" + input.build_osenv_platform
- : "gnumake-" + input.build_platform),
-
+ revision: makeRevision,
+ module: makeModule,
configure_args: "MAKE=" + makeBinDir + "/make",
-
environment: {
"MAKE": makeBinDir + "/make"
},
-
environment_path: makeBinDir
},
diff -Nru openjdk-17-17.0.6+10/make/conf/version-numbers.conf openjdk-17-17.0.7+7/make/conf/version-numbers.conf
--- openjdk-17-17.0.6+10/make/conf/version-numbers.conf 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/conf/version-numbers.conf 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -28,12 +28,12 @@
DEFAULT_VERSION_FEATURE=17
DEFAULT_VERSION_INTERIM=0
-DEFAULT_VERSION_UPDATE=6
+DEFAULT_VERSION_UPDATE=7
DEFAULT_VERSION_PATCH=0
DEFAULT_VERSION_EXTRA1=0
DEFAULT_VERSION_EXTRA2=0
DEFAULT_VERSION_EXTRA3=0
-DEFAULT_VERSION_DATE=2023-01-17
+DEFAULT_VERSION_DATE=2023-04-18
DEFAULT_VERSION_CLASSFILE_MAJOR=61 # "`$EXPR $DEFAULT_VERSION_FEATURE + 44`"
DEFAULT_VERSION_CLASSFILE_MINOR=0
DEFAULT_VERSION_DOCS_API_SINCE=11
diff -Nru openjdk-17-17.0.6+10/make/data/cacerts/certignaca openjdk-17-17.0.7+7/make/data/cacerts/certignaca
--- openjdk-17-17.0.6+10/make/data/cacerts/certignaca 1970-01-01 00:00:00.000000000 +0000
+++ openjdk-17-17.0.7+7/make/data/cacerts/certignaca 2023-04-12 20:11:58.000000000 +0000
@@ -0,0 +1,29 @@
+Owner: CN=Certigna, O=Dhimyotis, C=FR
+Issuer: CN=Certigna, O=Dhimyotis, C=FR
+Serial number: fedce3010fc948ff
+Valid from: Fri Jun 29 15:13:05 GMT 2007 until: Tue Jun 29 15:13:05 GMT 2027
+Signature algorithm name: SHA1withRSA
+Subject Public Key Algorithm: 2048-bit RSA key
+Version: 3
+-----BEGIN CERTIFICATE-----
+MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV
+BAYTAkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hMB4X
+DTA3MDYyOTE1MTMwNVoXDTI3MDYyOTE1MTMwNVowNDELMAkGA1UEBhMCRlIxEjAQ
+BgNVBAoMCURoaW15b3RpczERMA8GA1UEAwwIQ2VydGlnbmEwggEiMA0GCSqGSIb3
+DQEBAQUAA4IBDwAwggEKAoIBAQDIaPHJ1tazNHUmgh7stL7qXOEm7RFHYeGifBZ4
+QCHkYJ5ayGPhxLGWkv8YbWkj4Sti993iNi+RB7lIzw7sebYs5zRLcAglozyHGxny
+gQcPOJAZ0xH+hrTy0V4eHpbNgGzOOzGTtvKg0KmVEn2lmsxryIRWijOp5yIVUxbw
+zBfsV1/pogqYCd7jX5xv3EjjhQsVWqa6n6xI4wmy9/Qy3l40vhx4XUJbzg4ij02Q
+130yGLMLLGq/jj8UEYkgDncUtT2UCIf3JR7VsmAA7G8qKCVuKj4YYxclPz5EIBb2
+JsglrgVKtOdjLPOMFlN+XPsRGgjBRmKfIrjxwo1p3Po6WAbfAgMBAAGjgbwwgbkw
+DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUGu3+QTmQtCRZvgHyUtVF9lo53BEw
+ZAYDVR0jBF0wW4AUGu3+QTmQtCRZvgHyUtVF9lo53BGhOKQ2MDQxCzAJBgNVBAYT
+AkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hggkA/tzj
+AQ/JSP8wDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzANBgkqhkiG
+9w0BAQUFAAOCAQEAhQMeknH2Qq/ho2Ge6/PAD/Kl1NqV5ta+aDY9fm4fTIrv0Q8h
+bV6lUmPOEvjvKtpv6zf+EwLHyzs+ImvaYS5/1HI93TDhHkxAGYwP15zRgzB7mFnc
+fca5DClMoTOi62c6ZYTTluLtdkVwj7Ur3vkj1kluPBS1xp81HlDQwY9qcEQCYsuu
+HWhBp6pX6FOqB9IG9tUUBguRA3UsbHK1YZWaDYu5Def131TN3ubY1gkIl2PlwS6w
+t0QmwCbAr1UwnjvVNioZBPRcHv/PLLf/0P2HQBHVESO7SMAhqaQoLf0V+LBOK/Qw
+WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg==
+-----END CERTIFICATE-----
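The Owner/Issuer/Serial header lines are derived from the certificate itself and can be re-checked against the PEM block (assuming the file is saved at make/data/cacerts/certignaca; openssl skips the leading header text before the BEGIN marker):

```
openssl x509 -in make/data/cacerts/certignaca -noout \
  -subject -issuer -serial -dates
```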
diff -Nru openjdk-17-17.0.6+10/make/data/lsrdata/language-subtag-registry.txt openjdk-17-17.0.7+7/make/data/lsrdata/language-subtag-registry.txt
--- openjdk-17-17.0.6+10/make/data/lsrdata/language-subtag-registry.txt 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/data/lsrdata/language-subtag-registry.txt 2023-04-12 20:11:58.000000000 +0000
@@ -1,4 +1,4 @@
-File-Date: 2021-05-11
+File-Date: 2022-08-08
%%
Type: language
Subtag: aa
@@ -2146,9 +2146,16 @@
Macrolanguage: ar
%%
Type: language
+Subtag: ajs
+Description: Algerian Jewish Sign Language
+Added: 2022-02-25
+%%
+Type: language
Subtag: ajt
Description: Judeo-Tunisian Arabic
Added: 2009-07-29
+Deprecated: 2022-02-25
+Preferred-Value: aeb
Macrolanguage: jrb
%%
Type: language
@@ -5772,6 +5779,11 @@
Deprecated: 2020-03-28
%%
Type: language
+Subtag: bpc
+Description: Mbuk
+Added: 2022-02-25
+%%
+Type: language
Subtag: bpd
Description: Banda-Banda
Added: 2009-07-29
@@ -6016,6 +6028,7 @@
%%
Type: language
Subtag: brb
+Description: Brao
Description: Lave
Added: 2009-07-29
%%
@@ -8155,6 +8168,11 @@
Macrolanguage: zh
%%
Type: language
+Subtag: cnq
+Description: Chung
+Added: 2022-02-25
+%%
+Type: language
Subtag: cnr
Description: Montenegrin
Added: 2018-01-23
@@ -8757,6 +8775,8 @@
Description: Chungmboko
Description: Cung
Added: 2009-07-29
+Deprecated: 2022-02-25
+Comments: see bpc, cnq
%%
Type: language
Subtag: cuh
@@ -10176,6 +10196,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: dsz
+Description: Mardin Sign Language
+Added: 2022-02-25
+%%
+Type: language
Subtag: dta
Description: Daur
Added: 2009-07-29
@@ -10602,6 +10627,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: egm
+Description: Benamanga
+Added: 2022-02-25
+%%
+Type: language
Subtag: ego
Description: Eggon
Added: 2009-07-29
@@ -10913,7 +10943,7 @@
%%
Type: language
Subtag: env
-Description: Enwan (Edu State)
+Description: Enwan (Edo State)
Added: 2009-07-29
%%
Type: language
@@ -11329,6 +11359,7 @@
Type: language
Subtag: fit
Description: Tornedalen Finnish
+Description: Meänkieli
Added: 2009-07-29
%%
Type: language
@@ -12838,6 +12869,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: gov
+Description: Goo
+Added: 2022-02-25
+%%
+Type: language
Subtag: gow
Description: Gorowa
Added: 2009-07-29
@@ -14941,6 +14977,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: imt
+Description: Imotong
+Added: 2022-02-25
+%%
+Type: language
Subtag: imy
Description: Milyan
Added: 2009-07-29
@@ -19458,6 +19499,8 @@
Subtag: lak
Description: Laka (Nigeria)
Added: 2009-07-29
+Deprecated: 2022-02-25
+Preferred-Value: ksp
%%
Type: language
Subtag: lal
@@ -19953,6 +19996,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: lgo
+Description: Lango (South Sudan)
+Added: 2022-02-25
+%%
+Type: language
Subtag: lgq
Description: Logba
Added: 2009-07-29
@@ -20552,6 +20600,8 @@
Subtag: lno
Description: Lango (South Sudan)
Added: 2009-07-29
+Deprecated: 2022-02-25
+Comments: see imt, lgo, lqr, oie
%%
Type: language
Subtag: lns
@@ -20724,6 +20774,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: lqr
+Description: Logir
+Added: 2022-02-25
+%%
+Type: language
Subtag: lra
Description: Rara Bakati'
Added: 2009-07-29
@@ -20809,6 +20864,12 @@
Added: 2021-02-20
%%
Type: language
+Subtag: lsc
+Description: Albarradas Sign Language
+Description: Lengua de señas Albarradas
+Added: 2022-02-25
+%%
+Type: language
Subtag: lsd
Description: Lishana Deni
Added: 2009-07-29
@@ -20883,6 +20944,13 @@
Added: 2019-04-16
%%
Type: language
+Subtag: lsw
+Description: Seychelles Sign Language
+Description: Lalang Siny Seselwa
+Description: Langue des Signes Seychelloise
+Added: 2022-02-25
+%%
+Type: language
Subtag: lsy
Description: Mauritian Sign Language
Added: 2010-03-11
@@ -26779,6 +26847,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: nww
+Description: Ndwewe
+Added: 2022-02-25
+%%
+Type: language
Subtag: nwx
Description: Middle Newar
Added: 2009-07-29
@@ -27201,6 +27274,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: oie
+Description: Okolie
+Added: 2022-02-25
+%%
+Type: language
Subtag: oin
Description: Inebu One
Added: 2009-07-29
@@ -28472,6 +28550,11 @@
Scope: collection
%%
Type: language
+Subtag: phj
+Description: Pahari
+Added: 2022-02-25
+%%
+Type: language
Subtag: phk
Description: Phake
Added: 2009-07-29
@@ -28572,6 +28655,7 @@
Subtag: pii
Description: Pini
Added: 2009-07-29
+Deprecated: 2022-02-25
%%
Type: language
Subtag: pij
@@ -29419,6 +29503,7 @@
%%
Type: language
Subtag: psc
+Description: Iranian Sign Language
Description: Persian Sign Language
Added: 2009-07-29
%%
@@ -29772,7 +29857,13 @@
Added: 2009-07-29
%%
Type: language
+Subtag: pzh
+Description: Pazeh
+Added: 2022-02-25
+%%
+Type: language
Subtag: pzn
+Description: Jejara Naga
Description: Para Naga
Added: 2009-07-29
%%
@@ -30394,6 +30485,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: rib
+Description: Bribri Sign Language
+Added: 2022-02-25
+%%
+Type: language
Subtag: rie
Description: Rien
Added: 2009-07-29
@@ -30627,6 +30723,11 @@
Deprecated: 2016-05-30
%%
Type: language
+Subtag: rnb
+Description: Brunca Sign Language
+Added: 2022-02-25
+%%
+Type: language
Subtag: rnd
Description: Ruund
Added: 2009-07-29
@@ -30770,6 +30871,12 @@
Deprecated: 2017-02-23
%%
Type: language
+Subtag: rsk
+Description: Ruthenian
+Description: Rusyn
+Added: 2022-02-25
+%%
+Type: language
Subtag: rsl
Description: Russian Sign Language
Added: 2009-07-29
@@ -30780,6 +30887,11 @@
Added: 2016-05-30
%%
Type: language
+Subtag: rsn
+Description: Rwandan Sign Language
+Added: 2022-02-25
+%%
+Type: language
Subtag: rtc
Description: Rungtu Chin
Added: 2012-08-12
@@ -32276,6 +32388,8 @@
Subtag: smd
Description: Sama
Added: 2009-07-29
+Deprecated: 2022-02-25
+Preferred-Value: kmb
%%
Type: language
Subtag: smf
@@ -32382,6 +32496,8 @@
Subtag: snb
Description: Sebuyau
Added: 2009-07-29
+Deprecated: 2022-02-25
+Preferred-Value: iba
%%
Type: language
Subtag: snc
@@ -35199,6 +35315,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: tok
+Description: Toki Pona
+Added: 2022-02-25
+%%
+Type: language
Subtag: tol
Description: Tolowa
Added: 2009-07-29
@@ -35541,6 +35662,8 @@
%%
Type: language
Subtag: trv
+Description: Sediq
+Description: Seediq
Description: Taroko
Added: 2009-07-29
%%
@@ -36432,6 +36555,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: ugh
+Description: Kubachi
+Added: 2022-02-25
+%%
+Type: language
Subtag: ugn
Description: Ugandan Sign Language
Added: 2009-07-29
@@ -36742,6 +36870,11 @@
Preferred-Value: ema
%%
Type: language
+Subtag: uon
+Description: Kulon
+Added: 2022-02-25
+%%
+Type: language
Subtag: upi
Description: Umeda
Added: 2009-07-29
@@ -36944,6 +37077,8 @@
Subtag: uun
Description: Kulon-Pazeh
Added: 2009-07-29
+Deprecated: 2022-02-25
+Comments: see pzh, uon
%%
Type: language
Subtag: uur
@@ -37714,6 +37849,11 @@
Added: 2013-09-10
%%
Type: language
+Subtag: wdt
+Description: Wendat
+Added: 2022-02-25
+%%
+Type: language
Subtag: wdu
Description: Wadjigu
Added: 2009-07-29
@@ -38348,6 +38488,7 @@
Subtag: wrd
Description: Warduji
Added: 2009-07-29
+Deprecated: 2022-02-25
%%
Type: language
Subtag: wrg
@@ -38613,6 +38754,8 @@
Subtag: wya
Description: Wyandot
Added: 2009-07-29
+Deprecated: 2022-02-25
+Comments: see wdt, wyn
%%
Type: language
Subtag: wyb
@@ -38630,6 +38773,11 @@
Added: 2009-07-29
%%
Type: language
+Subtag: wyn
+Description: Wyandot
+Added: 2022-02-25
+%%
+Type: language
Subtag: wyr
Description: Wayoró
Added: 2009-07-29
@@ -38936,6 +39084,11 @@
Added: 2017-02-23
%%
Type: language
+Subtag: xdq
+Description: Kaitag
+Added: 2022-02-25
+%%
+Type: language
Subtag: xdy
Description: Malayic Dayak
Added: 2009-07-29
@@ -39079,6 +39232,11 @@
Macrolanguage: lah
%%
Type: language
+Subtag: xhm
+Description: Middle Khmer (1400 to 1850 CE)
+Added: 2022-02-25
+%%
+Type: language
Subtag: xhr
Description: Hernican
Added: 2009-07-29
@@ -39215,6 +39373,7 @@
%%
Type: language
Subtag: xkk
+Description: Kachok
Description: Kaco'
Added: 2009-07-29
%%
@@ -39469,6 +39628,7 @@
%%
Type: language
Subtag: xmx
+Description: Salawati
Description: Maden
Added: 2009-07-29
%%
@@ -41728,6 +41888,12 @@
Macrolanguage: zap
%%
Type: language
+Subtag: zcd
+Description: Las Delicias Zapotec
+Added: 2022-02-25
+Macrolanguage: zap
+%%
+Type: language
Subtag: zch
Description: Central Hongshuihe Zhuang
Added: 2009-07-29
@@ -42700,6 +42866,13 @@
Macrolanguage: ar
%%
Type: extlang
+Subtag: ajs
+Description: Algerian Jewish Sign Language
+Added: 2022-02-25
+Preferred-Value: ajs
+Prefix: sgn
+%%
+Type: extlang
Subtag: apc
Description: North Levantine Arabic
Added: 2009-07-29
@@ -43104,6 +43277,13 @@
Prefix: sgn
%%
Type: extlang
+Subtag: dsz
+Description: Mardin Sign Language
+Added: 2022-02-25
+Preferred-Value: dsz
+Prefix: sgn
+%%
+Type: extlang
Subtag: dup
Description: Duano
Added: 2009-07-29
@@ -43538,6 +43718,14 @@
Prefix: sgn
%%
Type: extlang
+Subtag: lsc
+Description: Albarradas Sign Language
+Description: Lengua de señas Albarradas
+Added: 2022-02-25
+Preferred-Value: lsc
+Prefix: sgn
+%%
+Type: extlang
Subtag: lsg
Description: Lyons Sign Language
Added: 2009-07-29
@@ -43589,6 +43777,15 @@
Prefix: sgn
%%
Type: extlang
+Subtag: lsw
+Description: Seychelles Sign Language
+Description: Lalang Siny Seselwa
+Description: Langue des Signes Seychelloise
+Added: 2022-02-25
+Preferred-Value: lsw
+Prefix: sgn
+%%
+Type: extlang
Subtag: lsy
Description: Mauritian Sign Language
Added: 2010-03-11
@@ -43880,6 +44077,7 @@
%%
Type: extlang
Subtag: psc
+Description: Iranian Sign Language
Description: Persian Sign Language
Added: 2009-07-29
Preferred-Value: psc
@@ -43944,6 +44142,13 @@
Prefix: sgn
%%
Type: extlang
+Subtag: rib
+Description: Bribri Sign Language
+Added: 2022-02-25
+Preferred-Value: rib
+Prefix: sgn
+%%
+Type: extlang
Subtag: rms
Description: Romanian Sign Language
Added: 2009-07-29
@@ -43951,6 +44156,13 @@
Prefix: sgn
%%
Type: extlang
+Subtag: rnb
+Description: Brunca Sign Language
+Added: 2022-02-25
+Preferred-Value: rnb
+Prefix: sgn
+%%
+Type: extlang
Subtag: rsi
Description: Rennellese Sign Language
Added: 2009-07-29
@@ -43973,6 +44185,13 @@
Prefix: sgn
%%
Type: extlang
+Subtag: rsn
+Description: Rwandan Sign Language
+Added: 2022-02-25
+Preferred-Value: rsn
+Prefix: sgn
+%%
+Type: extlang
Subtag: sdl
Description: Saudi Arabian Sign Language
Added: 2009-07-29
@@ -44793,6 +45012,11 @@
Added: 2005-10-16
%%
Type: script
+Subtag: Kawi
+Description: Kawi
+Added: 2021-12-24
+%%
+Type: script
Subtag: Khar
Description: Kharoshthi
Added: 2005-10-16
@@ -45012,6 +45236,11 @@
Added: 2005-10-16
%%
Type: script
+Subtag: Nagm
+Description: Nag Mundari
+Added: 2021-12-24
+%%
+Type: script
Subtag: Nand
Description: Nandinagari
Added: 2018-10-28
@@ -45290,6 +45519,11 @@
Added: 2006-07-21
%%
Type: script
+Subtag: Sunu
+Description: Sunuwar
+Added: 2021-12-24
+%%
+Type: script
Subtag: Sylo
Description: Syloti Nagri
Added: 2005-10-16
@@ -46736,6 +46970,7 @@
%%
Type: region
Subtag: TR
+Description: Türkiye
Description: Turkey
Added: 2005-10-16
%%
@@ -47357,6 +47592,12 @@
Comments: Indicates that the content is transcribed according to X-SAMPA
%%
Type: variant
+Subtag: gallo
+Description: Gallo
+Added: 2021-08-05
+Prefix: fr
+%%
+Type: variant
Subtag: gascon
Description: Gascon
Added: 2018-04-22
@@ -47526,6 +47767,19 @@
dialects of Resian
%%
Type: variant
+Subtag: ltg1929
+Description: The Latgalian language orthography codified in 1929
+Added: 2022-08-05
+Prefix: ltg
+%%
+Type: variant
+Subtag: ltg2007
+Description: The Latgalian language orthography codified in the language
+ law in 2007
+Added: 2022-06-23
+Prefix: ltg
+%%
+Type: variant
Subtag: luna1918
Description: Post-1917 Russian orthography
Added: 2010-10-10
@@ -47779,6 +48033,13 @@
"idioms" of the Romansh language.
%%
Type: variant
+Subtag: synnejyl
+Description: Synnejysk
+Description: South Jutish
+Added: 2021-07-17
+Prefix: da
+%%
+Type: variant
Subtag: tarask
Description: Belarusian in Taraskievica orthography
Added: 2007-04-27
diff -Nru openjdk-17-17.0.6+10/make/hotspot/test/GtestImage.gmk openjdk-17-17.0.7+7/make/hotspot/test/GtestImage.gmk
--- openjdk-17-17.0.6+10/make/hotspot/test/GtestImage.gmk 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/hotspot/test/GtestImage.gmk 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2016, 2023, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -41,10 +41,22 @@
$(foreach v, $(JVM_VARIANTS), \
$(eval $(call SetupCopyFiles, COPY_GTEST_MSVCR_$v, \
DEST := $(TEST_IMAGE_DIR)/hotspot/gtest/$v, \
- FILES := $(MSVCR_DLL) $(VCRUNTIME_1_DLL) $(MSVCP_DLL), \
+ FILES := $(MSVCR_DLL), \
FLATTEN := true, \
)) \
$(eval TARGETS += $$(COPY_GTEST_MSVCR_$v)) \
+ $(eval $(call SetupCopyFiles, COPY_GTEST_VCRUNTIME_1_$v, \
+ DEST := $(TEST_IMAGE_DIR)/hotspot/gtest/$v, \
+ FILES := $(VCRUNTIME_1_DLL), \
+ FLATTEN := true, \
+ )) \
+ $(eval TARGETS += $$(COPY_GTEST_VCRUNTIME_1_$v)) \
+ $(eval $(call SetupCopyFiles, COPY_GTEST_MSVCP_$v, \
+ DEST := $(TEST_IMAGE_DIR)/hotspot/gtest/$v, \
+ FILES := $(MSVCP_DLL), \
+ FLATTEN := true, \
+ )) \
+ $(eval TARGETS += $$(COPY_GTEST_MSVCP_$v)) \
$(if $(call equals, $(COPY_DEBUG_SYMBOLS), true), \
$(eval $(call SetupCopyFiles, COPY_GTEST_PDB_$v, \
SRC := $(HOTSPOT_OUTPUTDIR)/variant-$v/libjvm/gtest, \
diff -Nru openjdk-17-17.0.6+10/make/modules/java.desktop/lib/Awt2dLibraries.gmk openjdk-17-17.0.7+7/make/modules/java.desktop/lib/Awt2dLibraries.gmk
--- openjdk-17-17.0.6+10/make/modules/java.desktop/lib/Awt2dLibraries.gmk 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/make/modules/java.desktop/lib/Awt2dLibraries.gmk 2023-04-12 20:11:58.000000000 +0000
@@ -423,7 +423,7 @@
$(BUILD_LIBFREETYPE_CFLAGS), \
EXTRA_HEADER_DIRS := $(BUILD_LIBFREETYPE_HEADER_DIRS), \
DISABLED_WARNINGS_microsoft := 4018 4267 4244 4312 4819, \
- DISABLED_WARNINGS_gcc := implicit-fallthrough cast-function-type bad-function-cast, \
+ DISABLED_WARNINGS_gcc := implicit-fallthrough cast-function-type bad-function-cast dangling-pointer stringop-overflow, \
LDFLAGS := $(LDFLAGS_JDKLIB) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
))
@@ -460,7 +460,7 @@
HARFBUZZ_DISABLED_WARNINGS_gcc := type-limits missing-field-initializers strict-aliasing
HARFBUZZ_DISABLED_WARNINGS_CXX_gcc := reorder delete-non-virtual-dtor strict-overflow \
- maybe-uninitialized class-memaccess unused-result extra
+ maybe-uninitialized class-memaccess unused-result extra use-after-free
HARFBUZZ_DISABLED_WARNINGS_clang := unused-value incompatible-pointer-types \
tautological-constant-out-of-range-compare int-to-pointer-cast \
undef missing-field-initializers range-loop-analysis \
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/aarch64/aarch64.ad openjdk-17-17.0.7+7/src/hotspot/cpu/aarch64/aarch64.ad
--- openjdk-17-17.0.6+10/src/hotspot/cpu/aarch64/aarch64.ad 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/aarch64/aarch64.ad 2023-04-12 20:11:58.000000000 +0000
@@ -3440,7 +3440,8 @@
__ mov_metadata(dst_reg, (Metadata*)con);
} else {
assert(rtype == relocInfo::none, "unexpected reloc type");
- if (con < (address)(uintptr_t)os::vm_page_size()) {
+ if (! __ is_valid_AArch64_address(con) ||
+ con < (address)(uintptr_t)os::vm_page_size()) {
__ mov(dst_reg, con);
} else {
uint64_t offset;
@@ -3916,7 +3917,7 @@
// Handle existing monitor.
__ ldr(tmp, Address(oop, oopDesc::mark_offset_in_bytes()));
- __ tbnz(disp_hdr, exact_log2(markWord::monitor_value), object_has_monitor);
+ __ tbnz(tmp, exact_log2(markWord::monitor_value), object_has_monitor);
// Check if it is still a light weight lock, this is true if we
// see the stack address of the basicLock in the markWord of the
@@ -4922,7 +4923,7 @@
match(iRegP_R0);
//match(iRegP_R2);
//match(iRegP_R4);
- //match(iRegP_R5);
+ match(iRegP_R5);
match(thread_RegP);
op_cost(0);
format %{ %}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp openjdk-17-17.0.7+7/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2020, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -131,7 +131,7 @@
// Enable vendor specific features
// Ampere eMAG
- if (_cpu == CPU_AMCC && (_model == 0) && (_variant == 0x3)) {
+ if (_cpu == CPU_AMCC && (_model == CPU_MODEL_EMAG) && (_variant == 0x3)) {
if (FLAG_IS_DEFAULT(AvoidUnalignedAccesses)) {
FLAG_SET_DEFAULT(AvoidUnalignedAccesses, true);
}
@@ -143,6 +143,13 @@
}
}
+ // Ampere CPUs: Ampere-1 and Ampere-1A
+ if (_cpu == CPU_AMPERE && ((_model == CPU_MODEL_AMPERE_1) || (_model == CPU_MODEL_AMPERE_1A))) {
+ if (FLAG_IS_DEFAULT(UseSIMDForMemoryOps)) {
+ FLAG_SET_DEFAULT(UseSIMDForMemoryOps, true);
+ }
+ }
+
// ThunderX
if (_cpu == CPU_CAVIUM && (_model == 0xA1)) {
guarantee(_variant != 0, "Pre-release hardware no longer supported.");
@@ -197,8 +204,10 @@
}
}
- // Neoverse N1
- if (_cpu == CPU_ARM && (_model == 0xd0c || _model2 == 0xd0c)) {
+ // Neoverse N1, N2 and V1
+ if (_cpu == CPU_ARM && ((_model == 0xd0c || _model2 == 0xd0c)
+ || (_model == 0xd49 || _model2 == 0xd49)
+ || (_model == 0xd40 || _model2 == 0xd40))) {
if (FLAG_IS_DEFAULT(UseSIMDForMemoryOps)) {
FLAG_SET_DEFAULT(UseSIMDForMemoryOps, true);
}
@@ -479,5 +488,41 @@
_spin_wait = get_spin_wait_desc();
+ check_virtualizations();
+
UNSUPPORTED_OPTION(CriticalJNINatives);
}
+
+void VM_Version::check_virtualizations() {
+#if defined(LINUX)
+ const char* info_file = "/sys/devices/virtual/dmi/id/product_name";
+ // check for various strings in the dmi data indicating virtualizations
+ char line[500];
+ FILE* fp = os::fopen(info_file, "r");
+ if (fp == nullptr) {
+ return;
+ }
+ while (fgets(line, sizeof(line), fp) != nullptr) {
+ if (strcasestr(line, "KVM") != 0) {
+ Abstract_VM_Version::_detected_virtualization = KVM;
+ break;
+ }
+ if (strcasestr(line, "VMware") != 0) {
+ Abstract_VM_Version::_detected_virtualization = VMWare;
+ break;
+ }
+ }
+ fclose(fp);
+#endif
+}
+
+void VM_Version::print_platform_virtualization_info(outputStream* st) {
+#if defined(LINUX)
+ VirtualizationType vrt = VM_Version::get_detected_virtualization();
+ if (vrt == KVM) {
+ st->print_cr("KVM virtualization detected");
+ } else if (vrt == VMWare) {
+ st->print_cr("VMWare virtualization detected");
+ }
+#endif
+}
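The detection is a plain string match on the DMI product name, so the input the new code reads can be inspected directly from a shell; on a KVM guest this typically looks like:

```
$ cat /sys/devices/virtual/dmi/id/product_name
KVM
```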
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp openjdk-17-17.0.7+7/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -60,6 +60,9 @@
public:
// Initialization
static void initialize();
+ static void check_virtualizations();
+
+ static void print_platform_virtualization_info(outputStream*);
// Asserts
static void assert_is_initialized() {
@@ -99,6 +102,14 @@
CPU_APPLE = 'a',
};
+ enum Ampere_CPU_Model {
+ CPU_MODEL_EMAG = 0x0, /* CPU implementer is CPU_AMCC */
+ CPU_MODEL_ALTRA = 0xd0c, /* CPU implementer is CPU_ARM, Neoverse N1 */
+ CPU_MODEL_ALTRAMAX = 0xd0c, /* CPU implementer is CPU_ARM, Neoverse N1 */
+ CPU_MODEL_AMPERE_1 = 0xac3, /* CPU implementer is CPU_AMPERE */
+ CPU_MODEL_AMPERE_1A = 0xac4 /* CPU implementer is CPU_AMPERE */
+ };
+
enum Feature_Flag {
#define CPU_FEATURE_FLAGS(decl) \
decl(FP, "fp", 0) \
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/assembler_x86.cpp openjdk-17-17.0.7+7/src/hotspot/cpu/x86/assembler_x86.cpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/assembler_x86.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/assembler_x86.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -4713,6 +4713,18 @@
emit_int16(0x00, (0xC0 | encode));
}
+void Assembler::evpshufb(XMMRegister dst, KRegister mask, XMMRegister nds, XMMRegister src, bool merge, int vector_len) {
+ assert(VM_Version::supports_avx512bw() && (vector_len == AVX_512bit || VM_Version::supports_avx512vl()), "");
+ InstructionAttr attributes(vector_len, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ false, /* uses_vl */ true);
+ attributes.set_is_evex_instruction();
+ attributes.set_embedded_opmask_register_specifier(mask);
+ if (merge) {
+ attributes.reset_is_clear_context();
+ }
+ int encode = simd_prefix_and_encode(dst, nds, src, VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ emit_int16(0x00, (0xC0 | encode));
+}
+
void Assembler::pshufb(XMMRegister dst, Address src) {
assert(VM_Version::supports_ssse3(), "");
InstructionMark im(this);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/assembler_x86.hpp openjdk-17-17.0.7+7/src/hotspot/cpu/x86/assembler_x86.hpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/assembler_x86.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/assembler_x86.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1853,6 +1853,7 @@
void pshufb(XMMRegister dst, XMMRegister src);
void pshufb(XMMRegister dst, Address src);
void vpshufb(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len);
+ void evpshufb(XMMRegister dst, KRegister mask, XMMRegister nds, XMMRegister src, bool merge, int vector_len);
// Shuffle Packed Doublewords
void pshufd(XMMRegister dst, XMMRegister src, int mode);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp openjdk-17-17.0.7+7/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -3922,3 +3922,49 @@
}
}
#endif
+
+void C2_MacroAssembler::rearrange_bytes(XMMRegister dst, XMMRegister shuffle, XMMRegister src, XMMRegister xtmp1,
+ XMMRegister xtmp2, XMMRegister xtmp3, Register rtmp, KRegister ktmp,
+ int vlen_enc) {
+ assert(VM_Version::supports_avx512bw(), "");
+ // Byte shuffles are inlane operations and indices are determined using
+ // lower 4 bit of each shuffle lane, thus all shuffle indices are
+ // normalized to index range 0-15. This makes sure that all the multiples
+ // of an index value are placed at same relative position in 128 bit
+ // lane i.e. elements corresponding to shuffle indices 16, 32 and 64
+ // will be 16th element in their respective 128 bit lanes.
+ movl(rtmp, 16);
+ evpbroadcastb(xtmp1, rtmp, vlen_enc);
+
+ // Compute a mask for the shuffle vector by comparing indices with the expression INDEX < 16.
+ // Broadcast the first 128 bit lane across the entire vector, shuffle the vector lanes using
+ // the original shuffle indices, and move the shuffled lanes corresponding to the true
+ // mask to the destination vector.
+ evpcmpb(ktmp, k0, shuffle, xtmp1, Assembler::lt, true, vlen_enc);
+ evshufi64x2(xtmp2, src, src, 0x0, vlen_enc);
+ evpshufb(dst, ktmp, xtmp2, shuffle, false, vlen_enc);
+
+ // Perform above steps with lane comparison expression as INDEX >= 16 && INDEX < 32
+ // and broadcasting second 128 bit lane.
+ evpcmpb(ktmp, k0, shuffle, xtmp1, Assembler::nlt, true, vlen_enc);
+ vpsllq(xtmp2, xtmp1, 0x1, vlen_enc);
+ evpcmpb(ktmp, ktmp, shuffle, xtmp2, Assembler::lt, true, vlen_enc);
+ evshufi64x2(xtmp3, src, src, 0x55, vlen_enc);
+ evpshufb(dst, ktmp, xtmp3, shuffle, true, vlen_enc);
+
+ // Perform above steps with lane comparison expression as INDEX >= 32 && INDEX < 48
+ // and broadcasting third 128 bit lane.
+ evpcmpb(ktmp, k0, shuffle, xtmp2, Assembler::nlt, true, vlen_enc);
+ vpaddb(xtmp1, xtmp1, xtmp2, vlen_enc);
+ evpcmpb(ktmp, ktmp, shuffle, xtmp1, Assembler::lt, true, vlen_enc);
+ evshufi64x2(xtmp3, src, src, 0xAA, vlen_enc);
+ evpshufb(dst, ktmp, xtmp3, shuffle, true, vlen_enc);
+
+ // Perform above steps with lane comparison expression as INDEX >= 48 && INDEX < 64
+ // and broadcasting the fourth 128 bit lane.
+ evpcmpb(ktmp, k0, shuffle, xtmp1, Assembler::nlt, true, vlen_enc);
+ vpsllq(xtmp2, xtmp2, 0x1, vlen_enc);
+ evpcmpb(ktmp, ktmp, shuffle, xtmp2, Assembler::lt, true, vlen_enc);
+ evshufi64x2(xtmp3, src, src, 0xFF, vlen_enc);
+ evpshufb(dst, ktmp, xtmp3, shuffle, true, vlen_enc);
+}
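
A hedged scalar model of what the masked shuffle sequence above computes (plain C++ standing in for the AVX-512 instructions; illustration only, not part of the patch):

#include <cstdint>
#include <cstring>

// dst, shuffle and src model 512-bit (64-byte) vectors.
void rearrange_bytes_model(uint8_t dst[64], const uint8_t shuffle[64],
                           const uint8_t src[64]) {
  for (int j = 0; j < 4; j++) {            // one pass per 128-bit lane
    uint8_t lane[16];
    std::memcpy(lane, src + 16 * j, 16);   // evshufi64x2: broadcast lane j
    for (int i = 0; i < 64; i++) {
      int idx = shuffle[i] & 0x3f;         // rearrange indices are 0-63
      if (idx / 16 == j) {                 // ktmp mask: 16*j <= idx < 16*(j+1)
        dst[i] = lane[idx & 0x0f];         // evpshufb: in-lane, low 4 bits
      }
    }
  }
}

Each evpshufb can only pick bytes from within a 128-bit lane, which is why the code broadcasts one source lane at a time and merges, under an opmask, exactly those destination bytes whose shuffle index falls in that lane.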
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp openjdk-17-17.0.7+7/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -274,4 +274,7 @@
Register limit, Register result, Register chr,
XMMRegister vec1, XMMRegister vec2, bool is_char, KRegister mask = knoreg);
+ void rearrange_bytes(XMMRegister dst, XMMRegister shuffle, XMMRegister src, XMMRegister xtmp1,
+ XMMRegister xtmp2, XMMRegister xtmp3, Register rtmp, KRegister ktmp, int vlen_enc);
+
#endif // CPU_X86_C2_MACROASSEMBLER_X86_HPP
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/macroAssembler_x86_md5.cpp openjdk-17-17.0.7+7/src/hotspot/cpu/x86/macroAssembler_x86_md5.cpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/macroAssembler_x86_md5.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/macroAssembler_x86_md5.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -66,16 +66,18 @@
movl(rdx, Address(rdi, 12));
#define FF(r1, r2, r3, r4, k, s, t) \
+ addl(r1, t); \
movl(rsi, r3); \
addl(r1, Address(buf, k*4)); \
xorl(rsi, r4); \
andl(rsi, r2); \
xorl(rsi, r4); \
- leal(r1, Address(r1, rsi, Address::times_1, t)); \
+ addl(r1, rsi); \
roll(r1, s); \
addl(r1, r2);
#define GG(r1, r2, r3, r4, k, s, t) \
+ addl(r1, t); \
movl(rsi, r4); \
movl(rdi, r4); \
addl(r1, Address(buf, k*4)); \
@@ -83,26 +85,28 @@
andl(rdi, r2); \
andl(rsi, r3); \
orl(rsi, rdi); \
- leal(r1, Address(r1, rsi, Address::times_1, t)); \
+ addl(r1, rsi); \
roll(r1, s); \
addl(r1, r2);
#define HH(r1, r2, r3, r4, k, s, t) \
+ addl(r1, t); \
movl(rsi, r3); \
addl(r1, Address(buf, k*4)); \
xorl(rsi, r4); \
xorl(rsi, r2); \
- leal(r1, Address(r1, rsi, Address::times_1, t)); \
+ addl(r1, rsi); \
roll(r1, s); \
addl(r1, r2);
#define II(r1, r2, r3, r4, k, s, t) \
+ addl(r1, t); \
movl(rsi, r4); \
notl(rsi); \
addl(r1, Address(buf, k*4)); \
orl(rsi, r2); \
xorl(rsi, r3); \
- leal(r1, Address(r1, rsi, Address::times_1, t)); \
+ addl(r1, rsi); \
roll(r1, s); \
addl(r1, r2);
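
The rewrite above is a pure reassociation. A hedged scalar sketch of one FF round (my paraphrase, not patch code): the round computes rotl(a + F(b,c,d) + M[k] + T, s) + b, and since 32-bit addition is commutative, adding the constant T up front with a plain addl instead of folding it into a trailing three-operand leal yields the same value while starting the accumulator's dependency chain earlier:

#include <cstdint>

static inline uint32_t rotl32(uint32_t x, int s) {
  return (x << s) | (x >> (32 - s));
}

// F(b,c,d) = (b & c) | (~b & d), computed with the same xor/and/xor
// trick the macro uses: ((c ^ d) & b) ^ d.
uint32_t ff_round(uint32_t a, uint32_t b, uint32_t c, uint32_t d,
                  uint32_t m_k, uint32_t t, int s) {
  uint32_t f = ((c ^ d) & b) ^ d;
  return rotl32(a + t + m_k + f, s) + b;   // order of the adds is irrelevant
}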
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/nativeInst_x86.cpp openjdk-17-17.0.7+7/src/hotspot/cpu/x86/nativeInst_x86.cpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/nativeInst_x86.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/nativeInst_x86.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -510,12 +510,27 @@
//
void NativeJump::patch_verified_entry(address entry, address verified_entry, address dest) {
// complete jump instruction (to be inserted) is in code_buffer;
+#ifdef _LP64
+ union {
+ jlong cb_long;
+ unsigned char code_buffer[8];
+ } u;
+
+ u.cb_long = *(jlong *)verified_entry;
+
+ intptr_t disp = (intptr_t)dest - ((intptr_t)verified_entry + 1 + 4);
+ guarantee(disp == (intptr_t)(int32_t)disp, "must be 32-bit offset");
+
+ u.code_buffer[0] = instruction_code;
+ *(int32_t*)(u.code_buffer + 1) = (int32_t)disp;
+
+ Atomic::store((jlong *) verified_entry, u.cb_long);
+ ICache::invalidate_range(verified_entry, 8);
+
+#else
unsigned char code_buffer[5];
code_buffer[0] = instruction_code;
intptr_t disp = (intptr_t)dest - ((intptr_t)verified_entry + 1 + 4);
-#ifdef AMD64
- guarantee(disp == (intptr_t)(int32_t)disp, "must be 32-bit offset");
-#endif // AMD64
*(int32_t*)(code_buffer + 1) = (int32_t)disp;
check_verified_entry_alignment(entry, verified_entry);
@@ -546,6 +561,7 @@
*(int32_t*)verified_entry = *(int32_t *)code_buffer;
// Invalidate. Opteron requires a flush after every write.
n_jump->wrote(0);
+#endif // _LP64
}
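
The 64-bit branch introduced above can be summarized by this hedged sketch (illustration only; it assumes the verified entry is 8-byte aligned and glosses over HotSpot's Atomic/ICache wrappers): the 5-byte "jmp rel32" is assembled inside a copy of the existing 8 bytes and published with a single 64-bit store, so a concurrently executing thread observes either the old or the new instruction, never a torn mix.

#include <atomic>
#include <cstdint>
#include <cstring>

void patch_jump_atomically(uint8_t* verified_entry, uint8_t* dest) {
  uint64_t word;
  std::memcpy(&word, verified_entry, 8);        // existing 8 bytes
  uint8_t* bytes = reinterpret_cast<uint8_t*>(&word);

  int64_t disp = dest - (verified_entry + 5);   // rel32 is relative to the next insn
  bytes[0] = 0xE9;                              // x86 'jmp rel32' opcode
  int32_t disp32 = static_cast<int32_t>(disp);  // must fit in 32 bits
  std::memcpy(bytes + 1, &disp32, 4);

  reinterpret_cast<std::atomic<uint64_t>*>(verified_entry)
      ->store(word, std::memory_order_relaxed); // one 8-byte store
}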
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/x86/x86.ad openjdk-17-17.0.7+7/src/hotspot/cpu/x86/x86.ad
--- openjdk-17-17.0.6+10/src/hotspot/cpu/x86/x86.ad 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/x86/x86.ad 2023-04-12 20:11:58.000000000 +0000
@@ -1788,10 +1788,6 @@
return false; // Implementation limitation due to how shuffle is loaded
} else if (size_in_bits == 256 && UseAVX < 2) {
return false; // Implementation limitation
- } else if (bt == T_BYTE && size_in_bits > 256 && !VM_Version::supports_avx512_vbmi()) {
- return false; // Implementation limitation
- } else if (bt == T_SHORT && size_in_bits > 256 && !VM_Version::supports_avx512bw()) {
- return false; // Implementation limitation
}
break;
case Op_VectorLoadMask:
@@ -7721,7 +7717,23 @@
ins_pipe( pipe_slow );
%}
-instruct rearrangeB_evex(vec dst, vec src, vec shuffle) %{
+
+instruct rearrangeB_evex(vec dst, vec src, vec shuffle, vec xtmp1, vec xtmp2, vec xtmp3, kReg ktmp, rRegI rtmp) %{
+ predicate(vector_element_basic_type(n) == T_BYTE &&
+ vector_length(n) > 32 && !VM_Version::supports_avx512_vbmi());
+ match(Set dst (VectorRearrange src shuffle));
+ effect(TEMP dst, TEMP xtmp1, TEMP xtmp2, TEMP xtmp3, TEMP ktmp, TEMP rtmp);
+ format %{ "vector_rearrange $dst, $shuffle, $src!\t using $xtmp1, $xtmp2, $xtmp3, $rtmp and $ktmp as TEMP" %}
+ ins_encode %{
+ int vlen_enc = vector_length_encoding(this);
+ __ rearrange_bytes($dst$$XMMRegister, $shuffle$$XMMRegister, $src$$XMMRegister,
+ $xtmp1$$XMMRegister, $xtmp2$$XMMRegister, $xtmp3$$XMMRegister,
+ $rtmp$$Register, $ktmp$$KRegister, vlen_enc);
+ %}
+ ins_pipe( pipe_slow );
+%}
+
+instruct rearrangeB_evex_vbmi(vec dst, vec src, vec shuffle) %{
predicate(vector_element_basic_type(n) == T_BYTE &&
vector_length(n) >= 32 && VM_Version::supports_avx512_vbmi());
match(Set dst (VectorRearrange src shuffle));
diff -Nru openjdk-17-17.0.6+10/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp openjdk-17-17.0.7+7/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp
--- openjdk-17-17.0.6+10/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -186,9 +186,17 @@
// Call the interpreter
if (JvmtiExport::can_post_interpreter_events()) {
- BytecodeInterpreter::run<true>(istate);
+ if (RewriteBytecodes) {
+ BytecodeInterpreter::run<true, true>(istate);
+ } else {
+ BytecodeInterpreter::run<true, false>(istate);
+ }
} else {
- BytecodeInterpreter::run<false>(istate);
+ if (RewriteBytecodes) {
+ BytecodeInterpreter::run<false, true>(istate);
+ } else {
+ BytecodeInterpreter::run<false, false>(istate);
+ }
}
fixup_after_potential_safepoint();
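
The four branches above are not redundant: run is a template, so each branch instantiates a separate copy of the interpreter loop with the JVMTI and RewriteBytecodes checks resolved at compile time. A hedged minimal analogue of the dispatch:

template <bool JVMTI_ENABLED, bool REWRITE_BYTECODES>
static void run_loop() {
  // In the real interpreter these flags guard hot-path work; as template
  // parameters, the dead branches are removed at compile time.
}

static void dispatch(bool jvmti, bool rewrite) {
  if (jvmti) {
    rewrite ? run_loop<true, true>() : run_loop<true, false>();
  } else {
    rewrite ? run_loop<false, true>() : run_loop<false, false>();
  }
}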
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/aix/os_aix.cpp openjdk-17-17.0.7+7/src/hotspot/os/aix/os_aix.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/aix/os_aix.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/aix/os_aix.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -903,8 +903,10 @@
// and save the caller's signal mask
PosixSignals::hotspot_sigmask(thread);
- log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", kernel thread id: " UINTX_FORMAT ").",
- os::current_thread_id(), (uintx) kernel_thread_id);
+ log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", kernel thread id: " UINTX_FORMAT
+ ", stack: " PTR_FORMAT " - " PTR_FORMAT " (" SIZE_FORMAT "k) ).",
+ os::current_thread_id(), (uintx) kernel_thread_id,
+ p2i(thread->stack_base()), p2i(thread->stack_end()), thread->stack_size());
return true;
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/bsd/os_bsd.cpp openjdk-17-17.0.7+7/src/hotspot/os/bsd/os_bsd.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/bsd/os_bsd.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/bsd/os_bsd.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -728,9 +728,10 @@
// and save the caller's signal mask
PosixSignals::hotspot_sigmask(thread);
- log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", pthread id: " UINTX_FORMAT ").",
- os::current_thread_id(), (uintx) pthread_self());
-
+ log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", pthread id: " UINTX_FORMAT
+ ", stack: " PTR_FORMAT " - " PTR_FORMAT " (" SIZE_FORMAT "k) ).",
+ os::current_thread_id(), (uintx) pthread_self(),
+ p2i(thread->stack_base()), p2i(thread->stack_end()), thread->stack_size());
return true;
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/linux/os_linux.cpp openjdk-17-17.0.7+7/src/hotspot/os/linux/os_linux.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/linux/os_linux.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/linux/os_linux.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -983,8 +983,10 @@
// and save the caller's signal mask
PosixSignals::hotspot_sigmask(thread);
- log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", pthread id: " UINTX_FORMAT ").",
- os::current_thread_id(), (uintx) pthread_self());
+ log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", pthread id: " UINTX_FORMAT
+ ", stack: " PTR_FORMAT " - " PTR_FORMAT " (" SIZE_FORMAT "k) ).",
+ os::current_thread_id(), (uintx) pthread_self(),
+ p2i(thread->stack_base()), p2i(thread->stack_end()), thread->stack_size());
return true;
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/posix/os_posix.cpp openjdk-17-17.0.7+7/src/hotspot/os/posix/os_posix.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/posix/os_posix.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/posix/os_posix.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -54,6 +54,7 @@
#include
#include
#include
+#include <locale.h>
#include
#include
#include
@@ -561,6 +562,33 @@
st->cr();
}
+// Print all active locale categories, one line each
+void os::Posix::print_active_locale(outputStream* st) {
+ st->print_cr("Active Locale:");
+ // Posix is quiet about how exactly LC_ALL is implemented.
+ // Just print it out too, in case LC_ALL is held separately
+ // from the individual categories.
+ #define LOCALE_CAT_DO(f) \
+ f(LC_ALL) \
+ f(LC_COLLATE) \
+ f(LC_CTYPE) \
+ f(LC_MESSAGES) \
+ f(LC_MONETARY) \
+ f(LC_NUMERIC) \
+ f(LC_TIME)
+ #define XX(cat) { cat, #cat },
+ const struct { int c; const char* name; } categories[] = {
+ LOCALE_CAT_DO(XX)
+ { -1, NULL }
+ };
+ #undef XX
+ #undef LOCALE_CAT_DO
+ for (int i = 0; categories[i].c != -1; i ++) {
+ const char* locale = setlocale(categories[i].c, NULL);
+ st->print_cr("%s=%s", categories[i].name,
+ ((locale != NULL) ? locale : "<unknown>"));
+ }
+}
bool os::get_host_name(char* buf, size_t buflen) {
struct utsname name;
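
A standalone sketch of the query idiom used by print_active_locale above (illustration only): passing NULL as the second argument turns setlocale into a pure query that returns the current setting without changing it.

#include <clocale>
#include <cstdio>

int main() {
  const struct { int cat; const char* name; } cats[] = {
    { LC_ALL, "LC_ALL" }, { LC_CTYPE, "LC_CTYPE" }, { LC_TIME, "LC_TIME" },
  };
  for (const auto& c : cats) {
    const char* loc = std::setlocale(c.cat, nullptr);  // query only
    std::printf("%s=%s\n", c.name, (loc != nullptr) ? loc : "<unknown>");
  }
  return 0;
}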
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/posix/os_posix.hpp openjdk-17-17.0.7+7/src/hotspot/os/posix/os_posix.hpp
--- openjdk-17-17.0.6+10/src/hotspot/os/posix/os_posix.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/posix/os_posix.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1999, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -95,6 +95,8 @@
static bool handle_stack_overflow(JavaThread* thread, address addr, address pc,
const void* ucVoid,
address* stub);
+
+ static void print_active_locale(outputStream* st);
};
/*
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/posix/perfMemory_posix.cpp openjdk-17-17.0.7+7/src/hotspot/os/posix/perfMemory_posix.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/posix/perfMemory_posix.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/posix/perfMemory_posix.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -47,6 +47,10 @@
# include
# include
+#if defined(LINUX)
+# include <sys/file.h>
+#endif
+
static char* backing_store_file_name = NULL; // name of the backing store
// file, if successfully created.
@@ -75,18 +79,6 @@
return mapAddress;
}
-// delete the PerfData memory region
-//
-static void delete_standard_memory(char* addr, size_t size) {
-
- // there are no persistent external resources to cleanup for standard
- // memory. since DestroyJavaVM does not support unloading of the JVM,
- // cleanup of the memory resource is not performed. The memory will be
- // reclaimed by the OS upon termination of the process.
- //
- return;
-}
-
// save the specified memory region to the given file
//
// Note: this function might be called from signal handler (by os::abort()),
@@ -707,17 +699,17 @@
}
}
-
-// cleanup stale shared memory resources
+// cleanup stale shared memory files
//
// This method attempts to remove all stale shared memory files in
// the named user temporary directory. It scans the named directory
-// for files matching the pattern ^$[0-9]*$. For each file found, the
-// process id is extracted from the file name and a test is run to
-// determine if the process is alive. If the process is not alive,
-// any stale file resources are removed.
+// for files matching the pattern ^$[0-9]*$.
+//
+// This directory should be used only by JVM processes owned by the
+// current user to store PerfMemory files. Any other files found
+// in this directory may be removed.
//
-static void cleanup_sharedmem_resources(const char* dirname) {
+static void cleanup_sharedmem_files(const char* dirname) {
int saved_cwd_fd;
// open the directory and set the current working directory to it
@@ -727,48 +719,95 @@
return;
}
- // for each entry in the directory that matches the expected file
- // name pattern, determine if the file resources are stale and if
- // so, remove the file resources. Note, instrumented HotSpot processes
- // for this user may start and/or terminate during this search and
- // remove or create new files in this directory. The behavior of this
- // loop under these conditions is dependent upon the implementation of
- // opendir/readdir.
+ // For each entry in the directory that matches the expected file
+ // name pattern, remove the file if it's determined to be stale.
+ // Note, instrumented HotSpot processes for this user may start and/or
+ // terminate during this search and remove or create new files in this
+ // directory. The behavior of this loop under these conditions is dependent
+ // upon the implementation of opendir/readdir.
//
struct dirent* entry;
errno = 0;
while ((entry = os::readdir(dirp)) != NULL) {
-
- pid_t pid = filename_to_pid(entry->d_name);
+ const char* filename = entry->d_name;
+ pid_t pid = filename_to_pid(filename);
if (pid == 0) {
-
- if (strcmp(entry->d_name, ".") != 0 && strcmp(entry->d_name, "..") != 0) {
+ if (strcmp(filename, ".") != 0 && strcmp(filename, "..") != 0) {
// attempt to remove all unexpected files, except "." and ".."
- unlink(entry->d_name);
+ unlink(filename);
}
errno = 0;
continue;
}
- // we now have a file name that converts to a valid integer
- // that could represent a process id . if this process id
- // matches the current process id or the process is not running,
- // then remove the stale file resources.
- //
- // process liveness is detected by sending signal number 0 to
- // the process id (see kill(2)). if kill determines that the
- // process does not exist, then the file resources are removed.
- // if kill determines that that we don't have permission to
- // signal the process, then the file resources are assumed to
- // be stale and are removed because the resources for such a
- // process should be in a different user specific directory.
+#if defined(LINUX)
+ // Special case on Linux, if multiple containers share the
+ // same /tmp directory:
//
- if ((pid == os::current_process_id()) ||
- (kill(pid, 0) == OS_ERR && (errno == ESRCH || errno == EPERM))) {
- unlink(entry->d_name);
+ // - All the JVMs must have the JDK-8286030 fix, or the behavior
+ // is undefined.
+ // - We cannot rely on the value of the pid, because it could
+ // belong to a process in a different namespace. We must use the flock
+ // protocol to determine if a live process is using this file.
+ // See create_sharedmem_file().
+ int fd;
+ RESTARTABLE(os::open(filename, O_RDONLY, 0), fd);
+ if (fd == OS_ERR) {
+ // Something went wrong. Ignore the error and don't try to remove the
+ // file.
+ log_debug(perf, memops)("os::open() for stale file check failed for %s/%s", dirname, filename);
+ errno = 0;
+ continue;
+ }
+
+ int n;
+ RESTARTABLE(::flock(fd, LOCK_EX|LOCK_NB), n);
+ if (n != 0) {
+ // Either another process holds the exclusive lock on this file, or
+ // something went wrong. Ignore the error and don't try to remove the
+ // file.
+ log_debug(perf, memops)("flock for stale file check failed for %s/%s", dirname, filename);
+ ::close(fd);
+ errno = 0;
+ continue;
+ }
+ // We are able to lock the file, but this file might have been created
+ // by an older JVM that doesn't use the flock protocol, so we must do
+ // the following checks (which are also done by older JVMs).
+#endif
+
+ // The following code assumes that pid must be in the same
+ // namespace as the current process.
+ bool stale = false;
+
+ if (pid == os::current_process_id()) {
+ // The file was created by a terminated process that happened
+ // to have the same pid as the current process.
+ stale = true;
+ } else if (kill(pid, 0) == OS_ERR) {
+ if (errno == ESRCH) {
+ // The target process does not exist.
+ stale = true;
+ } else if (errno == EPERM) {
+ // The file was created by a terminated process that happened
+ // to have the same pid as a process not owned by the current user.
+ stale = true;
+ }
}
+
+ if (stale) {
+ log_info(perf, memops)("Remove stale file %s/%s", dirname, filename);
+ unlink(filename);
+ }
+
+#if defined(LINUX)
+ // Hold the lock until here to prevent other JVMs from using this file
+ // while we are in the middle of deleting it.
+ ::close(fd);
+#endif
+
errno = 0;
}
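
A hedged sketch of the flock(2) protocol the cleanup above relies on (my illustration, not patch code): the JVM that creates a PerfMemory file takes an exclusive lock in create_sharedmem_file() and holds it for the life of the process, so a cleaner that wins a non-blocking exclusive lock has proven that no fix-aware JVM still owns the file, independent of pid namespaces.

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Returns true only when no live lock-holder can exist for the file.
static bool provably_unowned(const char* path) {
  int fd = open(path, O_RDONLY);
  if (fd < 0) {
    return false;               // cannot tell; leave the file alone
  }
  if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
    close(fd);                  // a live owner still holds the lock
    return false;
  }
  // Lock acquired. Files created by pre-fix JVMs are never locked, so
  // the caller must still run the pid-based staleness checks.
  close(fd);                    // closing the fd also releases the lock
  return true;
}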
@@ -814,13 +853,13 @@
return true;
}
-// create the shared memory file resources
+// create the shared memory file
//
// This method creates the shared memory file with the given size
// This method also creates the user specific temporary directory, if
// it does not yet exist.
//
-static int create_sharedmem_resources(const char* dirname, const char* filename, size_t size) {
+static int create_sharedmem_file(const char* dirname, const char* filename, size_t size) {
// make the user temporary directory
if (!make_user_tmp_dir(dirname)) {
@@ -868,6 +907,32 @@
return -1;
}
+#if defined(LINUX)
+ // On Linux, different containerized processes that share the same /tmp
+ // directory (e.g., with "docker --volume ...") may have the same pid and
+ // try to use the same file. To avoid conflicts among such
+ // processes, we allow only one of them (the winner of the flock() call)
+ // to write to the file. All the other processes will give up and will
+ // have perfdata disabled.
+ //
+ // Note that the flock will be automatically given up when the winner
+ // process exits.
+ //
+ // The locking protocol works only with other JVMs that have the JDK-8286030
+ // fix. If you are sharing the /tmp directory among different containers,
+ // do not use older JVMs that don't have this fix, or the behavior is undefined.
+ int n;
+ RESTARTABLE(::flock(fd, LOCK_EX|LOCK_NB), n);
+ if (n != 0) {
+ log_warning(perf, memops)("Cannot use file %s/%s because %s (errno = %d)", dirname, filename,
+ (errno == EWOULDBLOCK) ?
+ "it is locked by another process" :
+ "flock() failed", errno);
+ ::close(fd);
+ return -1;
+ }
+#endif
+
// truncate the file to get rid of any existing data
RESTARTABLE(::ftruncate(fd, (off_t)0), result);
if (result == OS_ERR) {
@@ -982,12 +1047,13 @@
}
// cleanup any stale shared memory files
- cleanup_sharedmem_resources(dirname);
+ cleanup_sharedmem_files(dirname);
assert(((size > 0) && (size % os::vm_page_size() == 0)),
"unexpected PerfMemory region size");
- fd = create_sharedmem_resources(dirname, short_filename, size);
+ log_info(perf, memops)("Trying to open %s/%s", dirname, short_filename);
+ fd = create_sharedmem_file(dirname, short_filename, size);
FREE_C_HEAP_ARRAY(char, user_name);
FREE_C_HEAP_ARRAY(char, dirname);
@@ -1020,6 +1086,8 @@
// it does not go through os api, the operation has to record from here
MemTracker::record_virtual_memory_reserve_and_commit((address)mapAddress, size, CURRENT_PC, mtInternal);
+ log_info(perf, memops)("Successfully opened");
+
return mapAddress;
}
@@ -1049,10 +1117,10 @@
//
static void delete_shared_memory(char* addr, size_t size) {
- // cleanup the persistent shared memory resources. since DestroyJavaVM does
- // not support unloading of the JVM, unmapping of the memory resource is
+ // Remove the shared memory file. Since DestroyJavaVM does
+ // not support unloading of the JVM, unmapping of the memory region is
// not performed. The memory will be reclaimed by the OS upon termination of
- // the process. The backing store file is deleted from the file system.
+ // the process.
assert(!PerfDisableSharedMem, "shouldn't be here");
@@ -1261,10 +1329,7 @@
save_memory_to_file(start(), capacity());
}
- if (PerfDisableSharedMem) {
- delete_standard_memory(start(), capacity());
- }
- else {
+ if (!PerfDisableSharedMem) {
delete_shared_memory(start(), capacity());
}
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/posix/signals_posix.cpp openjdk-17-17.0.7+7/src/hotspot/os/posix/signals_posix.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/posix/signals_posix.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/posix/signals_posix.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1381,7 +1381,6 @@
st->print(", flags=");
int flags = get_sanitized_sa_flags(act);
print_sa_flags(st, flags);
-
}
// Print established signal handler for this signal.
@@ -1398,6 +1397,11 @@
sigaction(sig, NULL, &current_act);
print_single_signal_handler(st, &current_act, buf, buflen);
+
+ sigset_t thread_sig_mask;
+ if (::pthread_sigmask(/* ignored */ SIG_BLOCK, NULL, &thread_sig_mask) == 0) {
+ st->print(", %s", sigismember(&thread_sig_mask, sig) ? "blocked" : "unblocked");
+ }
st->cr();
// If we expected to see our own hotspot signal handler but found a different one,
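
The mask query added above generalizes to this small sketch (illustration only): when the new-set argument is NULL, pthread_sigmask only reads the calling thread's mask and the 'how' argument is ignored.

#include <csignal>
#include <cstdio>
#include <pthread.h>

static void print_blocked_state(int sig) {
  sigset_t mask;
  if (pthread_sigmask(SIG_BLOCK, nullptr, &mask) == 0) {  // query only
    std::printf("signal %d is %s\n", sig,
                sigismember(&mask, sig) ? "blocked" : "unblocked");
  }
}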
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/windows/os_windows.cpp openjdk-17-17.0.7+7/src/hotspot/os/windows/os_windows.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/windows/os_windows.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/windows/os_windows.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -624,8 +624,10 @@
thread->set_osthread(osthread);
- log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ").",
- os::current_thread_id());
+ log_info(os, thread)("Thread attached (tid: " UINTX_FORMAT ", stack: "
+ PTR_FORMAT " - " PTR_FORMAT " (" SIZE_FORMAT "k) ).",
+ os::current_thread_id(), p2i(thread->stack_base()),
+ p2i(thread->stack_end()), thread->stack_size());
return true;
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os/windows/perfMemory_windows.cpp openjdk-17-17.0.7+7/src/hotspot/os/windows/perfMemory_windows.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os/windows/perfMemory_windows.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os/windows/perfMemory_windows.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2001, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -1239,6 +1239,7 @@
if (PrintMiscellaneous && Verbose) {
warning("%s directory is insecure\n", dirname);
}
+ free_security_attr(pDirSA);
return false;
}
// The administrator should be able to delete this directory.
@@ -1254,18 +1255,15 @@
dirname, lasterror);
}
}
- }
- else {
+ } else {
if (PrintMiscellaneous && Verbose) {
warning("CreateDirectory failed: %d\n", GetLastError());
}
+ free_security_attr(pDirSA);
return false;
}
}
-
- // free the security attributes structure
free_security_attr(pDirSA);
-
return true;
}
@@ -1297,6 +1295,8 @@
if (!make_user_tmp_dir(dirname)) {
// could not make/find the directory or the found directory
// was not secure
+ free_security_attr(lpFileSA);
+ free_security_attr(lpSmoSA);
return NULL;
}
@@ -1328,6 +1328,7 @@
if (PrintMiscellaneous && Verbose) {
warning("could not create file %s: %d\n", filename, lasterror);
}
+ free_security_attr(lpSmoSA);
return NULL;
}
@@ -1833,7 +1834,7 @@
return;
}
- if (MemTracker::tracking_level() > NMT_minimal) {
+ if (MemTracker::enabled()) {
// it does not go through os api, the operation has to record from here
Tracker tkr(Tracker::release);
remove_file_mapping(addr);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp openjdk-17-17.0.7+7/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -128,6 +128,12 @@
intptr_t* sp;
intptr_t* fp;
address epc = fetch_frame_from_context(ucVoid, &sp, &fp);
+ if (!is_readable_pointer(epc)) {
+ // Try to recover from calling into bad memory
+ // Assume new frame has not been set up, the same as
+ // compiled frame stack bang
+ return fetch_compiled_frame_from_context(ucVoid);
+ }
return frame(sp, fp, epc);
}
@@ -342,7 +348,7 @@
// Note: it may be unsafe to inspect memory near pc. For example, pc may
// point to garbage if entry point in an nmethod is corrupted. Leave
// this at the end, and hope for the best.
- address pc = os::Posix::ucontext_get_pc(uc);
+ address pc = os::fetch_frame_from_context(uc).pc();
print_instructions(st, pc, 4/*native instruction size*/);
st->cr();
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp openjdk-17-17.0.7+7/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -142,6 +142,12 @@
intptr_t* sp;
intptr_t* fp;
address epc = fetch_frame_from_context(ucVoid, &sp, &fp);
+ if (!is_readable_pointer(epc)) {
+ // Try to recover from calling into bad memory
+ // Assume new frame has not been set up, the same as
+ // compiled frame stack bang
+ return fetch_compiled_frame_from_context(ucVoid);
+ }
return frame(sp, fp, epc);
}
@@ -579,7 +585,7 @@
// Note: it may be unsafe to inspect memory near pc. For example, pc may
// point to garbage if entry point in an nmethod is corrupted. Leave
// this at the end, and hope for the best.
- address pc = os::Posix::ucontext_get_pc(uc);
+ address pc = os::fetch_frame_from_context(uc).pc();
print_instructions(st, pc, sizeof(char));
st->cr();
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os_cpu/windows_aarch64/vm_version_windows_aarch64.cpp openjdk-17-17.0.7+7/src/hotspot/os_cpu/windows_aarch64/vm_version_windows_aarch64.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os_cpu/windows_aarch64/vm_version_windows_aarch64.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os_cpu/windows_aarch64/vm_version_windows_aarch64.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -23,6 +23,7 @@
*/
#include "precompiled.hpp"
+#include "logging/log.hpp"
#include "runtime/os.hpp"
#include "runtime/vm_version.hpp"
diff -Nru openjdk-17-17.0.6+10/src/hotspot/os_cpu/windows_x86/os_windows_x86.cpp openjdk-17-17.0.7+7/src/hotspot/os_cpu/windows_x86/os_windows_x86.cpp
--- openjdk-17-17.0.6+10/src/hotspot/os_cpu/windows_x86/os_windows_x86.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/os_cpu/windows_x86/os_windows_x86.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -319,6 +319,12 @@
intptr_t* sp;
intptr_t* fp;
address epc = fetch_frame_from_context(ucVoid, &sp, &fp);
+ if (!is_readable_pointer(epc)) {
+ // Try to recover from calling into bad memory
+ // Assume new frame has not been set up, the same as
+ // compiled frame stack bang
+ return frame(sp + 1, fp, (address)*sp);
+ }
return frame(sp, fp, epc);
}
@@ -450,7 +456,7 @@
// Note: it may be unsafe to inspect memory near pc. For example, pc may
// point to garbage if entry point in an nmethod is corrupted. Leave
// this at the end, and hope for the best.
- address pc = (address)uc->REG_PC;
+ address pc = os::fetch_frame_from_context(uc).pc();
print_instructions(st, pc, sizeof(char));
st->cr();
}
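
The three platform hunks above share one recovery idea, spelled out inline here for Windows: if the saved pc is unreadable, assume the faulting call never set up a frame, so the return address is still the word at the top of the stack. A hedged sketch with simplified types (the Linux variants reach the same result via fetch_compiled_frame_from_context):

#include <cstdint>

struct SimpleFrame { intptr_t* sp; intptr_t* fp; intptr_t pc; };

SimpleFrame recover_frame(intptr_t* sp, intptr_t* fp, intptr_t pc,
                          bool (*readable)(intptr_t)) {
  if (!readable(pc)) {
    // The call into bad memory pushed only a return address: take it as
    // the pc and pop it off to restore the caller's stack pointer.
    return SimpleFrame{ sp + 1, fp, *sp };
  }
  return SimpleFrame{ sp, fp, pc };
}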
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/classfile/javaClasses.cpp openjdk-17-17.0.7+7/src/hotspot/share/classfile/javaClasses.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/classfile/javaClasses.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/classfile/javaClasses.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -2719,6 +2719,51 @@
}
}
+Handle java_lang_Throwable::get_cause_with_stack_trace(Handle throwable, TRAPS) {
+ // Call to JVM to fill in the stack trace and clear declaringClassObject to
+ // not keep classes alive in the stack trace.
+ // call this: public StackTraceElement[] getStackTrace()
+ assert(throwable.not_null(), "shouldn't be");
+
+ JavaValue result(T_ARRAY);
+ JavaCalls::call_virtual(&result, throwable,
+ vmClasses::Throwable_klass(),
+ vmSymbols::getStackTrace_name(),
+ vmSymbols::getStackTrace_signature(),
+ CHECK_NH);
+ Handle stack_trace(THREAD, result.get_oop());
+ assert(stack_trace->is_objArray(), "Should be an array");
+
+ // Throw ExceptionInInitializerError as the cause with this exception in
+ // the message and stack trace.
+
+ // Now create the message with the original exception and thread name.
+ Symbol* message = java_lang_Throwable::detail_message(throwable());
+ ResourceMark rm(THREAD);
+ stringStream st;
+ st.print("Exception %s%s ", throwable()->klass()->name()->as_klass_external_name(),
+ message == nullptr ? "" : ":");
+ if (message == NULL) {
+ st.print("[in thread \"%s\"]", THREAD->name());
+ } else {
+ st.print("%s [in thread \"%s\"]", message->as_C_string(), THREAD->name());
+ }
+
+ Symbol* exception_name = vmSymbols::java_lang_ExceptionInInitializerError();
+ Handle h_cause = Exceptions::new_exception(THREAD, exception_name, st.as_string());
+
+ // If new_exception returns a different exception while creating the exception, return null.
+ if (h_cause->klass()->name() != exception_name) {
+ log_info(class, init)("Exception thrown while saving initialization exception %s",
+ h_cause->klass()->external_name());
+ return Handle();
+ }
+ java_lang_Throwable::set_stacktrace(h_cause(), stack_trace());
+ // Clear backtrace because the stacktrace should be used instead.
+ set_backtrace(h_cause(), NULL);
+ return h_cause;
+}
+
bool java_lang_Throwable::get_top_method_and_bci(oop throwable, Method** method, int* bci) {
JavaThread* current = JavaThread::current();
objArrayHandle result(current, objArrayOop(backtrace(throwable)));
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/classfile/javaClasses.hpp openjdk-17-17.0.7+7/src/hotspot/share/classfile/javaClasses.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/classfile/javaClasses.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/classfile/javaClasses.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -567,6 +567,10 @@
static void fill_in_stack_trace(Handle throwable, const methodHandle& method = methodHandle());
// Programmatic access to stack trace
static void get_stack_trace_elements(Handle throwable, objArrayHandle stack_trace, TRAPS);
+
+ // For recreating class initialization error exceptions.
+ static Handle get_cause_with_stack_trace(Handle throwable, TRAPS);
+
// Printing
static void print(oop throwable, outputStream* st);
static void print_stack_trace(Handle throwable, outputStream* st);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/classfile/systemDictionary.cpp openjdk-17-17.0.7+7/src/hotspot/share/classfile/systemDictionary.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/classfile/systemDictionary.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/classfile/systemDictionary.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1626,6 +1626,8 @@
} else {
assert(_pd_cache_table->number_of_entries() == 0, "should be empty");
}
+
+ InstanceKlass::clean_initialization_error_table();
}
return unloading_occurred;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/classfile/vmSymbols.hpp openjdk-17-17.0.7+7/src/hotspot/share/classfile/vmSymbols.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/classfile/vmSymbols.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/classfile/vmSymbols.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -368,6 +368,7 @@
template(class_initializer_name, "") \
template(println_name, "println") \
template(printStackTrace_name, "printStackTrace") \
+ template(getStackTrace_name, "getStackTrace") \
template(main_name, "main") \
template(name_name, "name") \
template(priority_name, "priority") \
@@ -593,7 +594,9 @@
template(int_String_signature, "(I)Ljava/lang/String;") \
template(boolean_boolean_int_signature, "(ZZ)I") \
template(big_integer_shift_worker_signature, "([I[IIII)V") \
- template(reflect_method_signature, "Ljava/lang/reflect/Method;") \
+ template(reflect_method_signature, "Ljava/lang/reflect/Method;") \
+ template(getStackTrace_signature, "()[Ljava/lang/StackTraceElement;") \
+ \
/* signature symbols needed by intrinsics */ \
VM_INTRINSICS_DO(VM_INTRINSIC_IGNORE, VM_SYMBOL_IGNORE, VM_SYMBOL_IGNORE, template, VM_ALIAS_IGNORE) \
\
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/code/nmethod.cpp openjdk-17-17.0.7+7/src/hotspot/share/code/nmethod.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/code/nmethod.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/code/nmethod.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -451,6 +451,42 @@
#endif
}
+#ifdef ASSERT
+class CheckForOopsClosure : public OopClosure {
+ bool _found_oop = false;
+ public:
+ virtual void do_oop(oop* o) { _found_oop = true; }
+ virtual void do_oop(narrowOop* o) { _found_oop = true; }
+ bool found_oop() { return _found_oop; }
+};
+class CheckForMetadataClosure : public MetadataClosure {
+ bool _found_metadata = false;
+ Metadata* _ignore = nullptr;
+ public:
+ CheckForMetadataClosure(Metadata* ignore) : _ignore(ignore) {}
+ virtual void do_metadata(Metadata* md) { if (md != _ignore) _found_metadata = true; }
+ bool found_metadata() { return _found_metadata; }
+};
+
+static void assert_no_oops_or_metadata(nmethod* nm) {
+ if (nm == nullptr) return;
+ assert(nm->oop_maps() == nullptr, "expectation");
+
+ CheckForOopsClosure cfo;
+ nm->oops_do(&cfo);
+ assert(!cfo.found_oop(), "no oops allowed");
+
+ // We allow an exception for the own Method, but require its class to be permanent.
+ Method* own_method = nm->method();
+ CheckForMetadataClosure cfm(/* ignore reference to own Method */ own_method);
+ nm->metadata_do(&cfm);
+ assert(!cfm.found_metadata(), "no metadata allowed");
+
+ assert(own_method->method_holder()->class_loader_data()->is_permanent_class_loader_data(),
+ "Method's class needs to be permanent");
+}
+#endif
+
nmethod* nmethod::new_native_nmethod(const methodHandle& method,
int compile_id,
CodeBuffer *code_buffer,
@@ -470,14 +506,19 @@
CodeOffsets offsets;
offsets.set_value(CodeOffsets::Verified_Entry, vep_offset);
offsets.set_value(CodeOffsets::Frame_Complete, frame_complete);
- nm = new (native_nmethod_size, CompLevel_none)
+
+ // MH intrinsics are dispatch stubs which are compatible with NonNMethod space.
+ // IsUnloadingBehaviour::is_unloading needs to handle them separately.
+ bool allow_NonNMethod_space = method->can_be_allocated_in_NonNMethod_space();
+ nm = new (native_nmethod_size, allow_NonNMethod_space)
nmethod(method(), compiler_none, native_nmethod_size,
compile_id, &offsets,
code_buffer, frame_size,
basic_lock_owner_sp_offset,
basic_lock_sp_offset,
oop_maps);
- NOT_PRODUCT(if (nm != NULL) native_nmethod_stats.note_native_nmethod(nm));
+ DEBUG_ONLY( if (allow_NonNMethod_space) assert_no_oops_or_metadata(nm); )
+ NOT_PRODUCT(if (nm != NULL) native_nmethod_stats.note_native_nmethod(nm));
}
if (nm != NULL) {
@@ -710,6 +751,14 @@
return CodeCache::allocate(nmethod_size, CodeCache::get_code_blob_type(comp_level));
}
+void* nmethod::operator new(size_t size, int nmethod_size, bool allow_NonNMethod_space) throw () {
+ // Try MethodNonProfiled and MethodProfiled.
+ void* return_value = CodeCache::allocate(nmethod_size, CodeBlobType::MethodNonProfiled);
+ if (return_value != nullptr || !allow_NonNMethod_space) return return_value;
+ // Try NonNMethod or give up.
+ return CodeCache::allocate(nmethod_size, CodeBlobType::NonNMethod);
+}
+
nmethod::nmethod(
Method* method,
CompilerType type,
@@ -1780,7 +1829,10 @@
// oops in the CompiledMethod, by calling oops_do on it.
state_unloading_cycle = current_cycle;
- if (is_zombie()) {
+ if (is_zombie() || method()->can_be_allocated_in_NonNMethod_space()) {
+ // When the nmethod is in NonNMethod space, we may reach here without IsUnloadingBehaviour.
+ // However, we only allow this for special methods which never get unloaded.
+
// Zombies without calculated unloading epoch are never unloading due to GC.
// There are no races where a previously observed is_unloading() nmethod
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/code/nmethod.hpp openjdk-17-17.0.7+7/src/hotspot/share/code/nmethod.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/code/nmethod.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/code/nmethod.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -324,6 +324,10 @@
// helper methods
void* operator new(size_t size, int nmethod_size, int comp_level) throw();
+ // For method handle intrinsics: Try MethodNonProfiled, MethodProfiled and NonNMethod.
+ // Attention: Only allow NonNMethod space for special nmethods which don't need to be
+ // findable by nmethod iterators! In particular, they must not contain oops!
+ void* operator new(size_t size, int nmethod_size, bool allow_NonNMethod_space) throw();
const char* reloc_string_for(u_char* begin, u_char* end);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -245,8 +245,7 @@
if (do_load) {
// load original value
- // alias_idx correct??
- pre_val = __ load(__ ctrl(), adr, val_type, bt, alias_idx);
+ pre_val = __ load(__ ctrl(), adr, val_type, bt, alias_idx, false, MemNode::unordered, LoadNode::Pinned);
}
// if (pre_val != NULL)
@@ -612,7 +611,6 @@
Node* top = Compile::current()->top();
Node* offset = adr->is_AddP() ? adr->in(AddPNode::Offset) : top;
- Node* load = CardTableBarrierSetC2::load_at_resolved(access, val_type);
// If we are reading the value of the referent field of a Reference
// object (either by using Unsafe directly or through reflection)
@@ -624,12 +622,26 @@
(in_heap && unknown && offset != top && obj != top));
if (!access.is_oop() || !need_read_barrier) {
- return load;
+ return CardTableBarrierSetC2::load_at_resolved(access, val_type);
}
assert(access.is_parse_access(), "entry not supported at optimization time");
+
C2ParseAccess& parse_access = static_cast<C2ParseAccess&>(access);
GraphKit* kit = parse_access.kit();
+ Node* load;
+
+ Node* control = kit->control();
+ const TypePtr* adr_type = access.addr().type();
+ MemNode::MemOrd mo = access.mem_node_mo();
+ bool requires_atomic_access = (decorators & MO_UNORDERED) == 0;
+ bool unaligned = (decorators & C2_UNALIGNED) != 0;
+ bool unsafe = (decorators & C2_UNSAFE_ACCESS) != 0;
+ // Pinned control dependency is the strictest. So it's ok to substitute it for any other.
+ load = kit->make_load(control, adr, val_type, access.type(), adr_type, mo,
+ LoadNode::Pinned, requires_atomic_access, unaligned, mismatched, unsafe,
+ access.barrier_data());
+
if (on_weak || on_phantom) {
// Use the pre-barrier to record the value in the referent field
@@ -664,85 +676,123 @@
return strcmp(call->_name, "write_ref_field_pre_entry") == 0 || strcmp(call->_name, "write_ref_field_post_entry") == 0;
}
-void G1BarrierSetC2::eliminate_gc_barrier(PhaseMacroExpand* macro, Node* node) const {
- assert(node->Opcode() == Op_CastP2X, "ConvP2XNode required");
- assert(node->outcnt() <= 2, "expects 1 or 2 users: Xor and URShift nodes");
- // It could be only one user, URShift node, in Object.clone() intrinsic
- // but the new allocation is passed to arraycopy stub and it could not
- // be scalar replaced. So we don't check the case.
-
- // An other case of only one user (Xor) is when the value check for NULL
- // in G1 post barrier is folded after CCP so the code which used URShift
- // is removed.
-
- // Take Region node before eliminating post barrier since it also
- // eliminates CastP2X node when it has only one user.
- Node* this_region = node->in(0);
- assert(this_region != NULL, "");
-
- // Remove G1 post barrier.
-
- // Search for CastP2X->Xor->URShift->Cmp path which
- // checks if the store done to a different from the value's region.
- // And replace Cmp with #0 (false) to collapse G1 post barrier.
- Node* xorx = node->find_out_with(Op_XorX);
- if (xorx != NULL) {
- Node* shift = xorx->unique_out();
- Node* cmpx = shift->unique_out();
- assert(cmpx->is_Cmp() && cmpx->unique_out()->is_Bool() &&
- cmpx->unique_out()->as_Bool()->_test._test == BoolTest::ne,
- "missing region check in G1 post barrier");
- macro->replace_node(cmpx, macro->makecon(TypeInt::CC_EQ));
-
- // Remove G1 pre barrier.
-
- // Search "if (marking != 0)" check and set it to "false".
- // There is no G1 pre barrier if previous stored value is NULL
- // (for example, after initialization).
- if (this_region->is_Region() && this_region->req() == 3) {
- int ind = 1;
- if (!this_region->in(ind)->is_IfFalse()) {
- ind = 2;
- }
- if (this_region->in(ind)->is_IfFalse() &&
- this_region->in(ind)->in(0)->Opcode() == Op_If) {
- Node* bol = this_region->in(ind)->in(0)->in(1);
- assert(bol->is_Bool(), "");
- cmpx = bol->in(1);
- if (bol->as_Bool()->_test._test == BoolTest::ne &&
- cmpx->is_Cmp() && cmpx->in(2) == macro->intcon(0) &&
- cmpx->in(1)->is_Load()) {
- Node* adr = cmpx->in(1)->as_Load()->in(MemNode::Address);
- const int marking_offset = in_bytes(G1ThreadLocalData::satb_mark_queue_active_offset());
- if (adr->is_AddP() && adr->in(AddPNode::Base) == macro->top() &&
- adr->in(AddPNode::Address)->Opcode() == Op_ThreadLocal &&
- adr->in(AddPNode::Offset) == macro->MakeConX(marking_offset)) {
- macro->replace_node(cmpx, macro->makecon(TypeInt::CC_EQ));
+bool G1BarrierSetC2::is_g1_pre_val_load(Node* n) {
+ if (n->is_Load() && n->as_Load()->has_pinned_control_dependency()) {
+ // Make sure the only users of it are: CmpP, StoreP, and a call to write_ref_field_pre_entry
+
+ // Skip possible decode
+ if (n->outcnt() == 1 && n->unique_out()->is_DecodeN()) {
+ n = n->unique_out();
+ }
+
+ if (n->outcnt() == 3) {
+ int found = 0;
+ for (SimpleDUIterator iter(n); iter.has_next(); iter.next()) {
+ Node* use = iter.get();
+ if (use->is_Cmp() || use->is_Store()) {
+ ++found;
+ } else if (use->is_CallLeaf()) {
+ CallLeafNode* call = use->as_CallLeaf();
+ if (strcmp(call->_name, "write_ref_field_pre_entry") == 0) {
+ ++found;
}
}
}
+ if (found == 3) {
+ return true;
+ }
}
+ }
+ return false;
+}
+
+bool G1BarrierSetC2::is_gc_pre_barrier_node(Node *node) const {
+ return is_g1_pre_val_load(node);
+}
+
+void G1BarrierSetC2::eliminate_gc_barrier(PhaseMacroExpand* macro, Node* node) const {
+ if (is_g1_pre_val_load(node)) {
+ macro->replace_node(node, macro->zerocon(node->as_Load()->bottom_type()->basic_type()));
} else {
- assert(!use_ReduceInitialCardMarks(), "can only happen with card marking");
- // This is a G1 post barrier emitted by the Object.clone() intrinsic.
- // Search for the CastP2X->URShiftX->AddP->LoadB->Cmp path which checks if the card
- // is marked as young_gen and replace the Cmp with 0 (false) to collapse the barrier.
- Node* shift = node->find_out_with(Op_URShiftX);
- assert(shift != NULL, "missing G1 post barrier");
- Node* addp = shift->unique_out();
- Node* load = addp->find_out_with(Op_LoadB);
- assert(load != NULL, "missing G1 post barrier");
- Node* cmpx = load->unique_out();
- assert(cmpx->is_Cmp() && cmpx->unique_out()->is_Bool() &&
- cmpx->unique_out()->as_Bool()->_test._test == BoolTest::ne,
- "missing card value check in G1 post barrier");
- macro->replace_node(cmpx, macro->makecon(TypeInt::CC_EQ));
- // There is no G1 pre barrier in this case
- }
- // Now CastP2X can be removed since it is used only on dead path
- // which currently still alive until igvn optimize it.
- assert(node->outcnt() == 0 || node->unique_out()->Opcode() == Op_URShiftX, "");
- macro->replace_node(node, macro->top());
+ assert(node->Opcode() == Op_CastP2X, "ConvP2XNode required");
+ assert(node->outcnt() <= 2, "expects 1 or 2 users: Xor and URShift nodes");
+ // It could be only one user, URShift node, in Object.clone() intrinsic
+ // but the new allocation is passed to arraycopy stub and it could not
+ // be scalar replaced. So we don't check the case.
+
+ // Another case of only one user (Xor) is when the value check for NULL
+ // in G1 post barrier is folded after CCP so the code which used URShift
+ // is removed.
+
+ // Take Region node before eliminating post barrier since it also
+ // eliminates CastP2X node when it has only one user.
+ Node* this_region = node->in(0);
+ assert(this_region != NULL, "");
+
+ // Remove G1 post barrier.
+
+ // Search for CastP2X->Xor->URShift->Cmp path which
+ // checks if the store was done to a region different from the value's region.
+ // And replace Cmp with #0 (false) to collapse G1 post barrier.
+ Node* xorx = node->find_out_with(Op_XorX);
+ if (xorx != NULL) {
+ Node* shift = xorx->unique_out();
+ Node* cmpx = shift->unique_out();
+ assert(cmpx->is_Cmp() && cmpx->unique_out()->is_Bool() &&
+ cmpx->unique_out()->as_Bool()->_test._test == BoolTest::ne,
+ "missing region check in G1 post barrier");
+ macro->replace_node(cmpx, macro->makecon(TypeInt::CC_EQ));
+
+ // Remove G1 pre barrier.
+
+ // Search "if (marking != 0)" check and set it to "false".
+ // There is no G1 pre barrier if previous stored value is NULL
+ // (for example, after initialization).
+ if (this_region->is_Region() && this_region->req() == 3) {
+ int ind = 1;
+ if (!this_region->in(ind)->is_IfFalse()) {
+ ind = 2;
+ }
+ if (this_region->in(ind)->is_IfFalse() &&
+ this_region->in(ind)->in(0)->Opcode() == Op_If) {
+ Node* bol = this_region->in(ind)->in(0)->in(1);
+ assert(bol->is_Bool(), "");
+ cmpx = bol->in(1);
+ if (bol->as_Bool()->_test._test == BoolTest::ne &&
+ cmpx->is_Cmp() && cmpx->in(2) == macro->intcon(0) &&
+ cmpx->in(1)->is_Load()) {
+ Node* adr = cmpx->in(1)->as_Load()->in(MemNode::Address);
+ const int marking_offset = in_bytes(G1ThreadLocalData::satb_mark_queue_active_offset());
+ if (adr->is_AddP() && adr->in(AddPNode::Base) == macro->top() &&
+ adr->in(AddPNode::Address)->Opcode() == Op_ThreadLocal &&
+ adr->in(AddPNode::Offset) == macro->MakeConX(marking_offset)) {
+ macro->replace_node(cmpx, macro->makecon(TypeInt::CC_EQ));
+ }
+ }
+ }
+ }
+ } else {
+ assert(!use_ReduceInitialCardMarks(), "can only happen with card marking");
+ // This is a G1 post barrier emitted by the Object.clone() intrinsic.
+ // Search for the CastP2X->URShiftX->AddP->LoadB->Cmp path which checks if the card
+ // is marked as young_gen and replace the Cmp with 0 (false) to collapse the barrier.
+ Node* shift = node->find_out_with(Op_URShiftX);
+ assert(shift != NULL, "missing G1 post barrier");
+ Node* addp = shift->unique_out();
+ Node* load = addp->find_out_with(Op_LoadB);
+ assert(load != NULL, "missing G1 post barrier");
+ Node* cmpx = load->unique_out();
+ assert(cmpx->is_Cmp() && cmpx->unique_out()->is_Bool() &&
+ cmpx->unique_out()->as_Bool()->_test._test == BoolTest::ne,
+ "missing card value check in G1 post barrier");
+ macro->replace_node(cmpx, macro->makecon(TypeInt::CC_EQ));
+ // There is no G1 pre barrier in this case
+ }
+ // Now CastP2X can be removed since it is used only on a dead path
+ // which is currently still alive until igvn optimizes it.
+ assert(node->outcnt() == 0 || node->unique_out()->Opcode() == Op_URShiftX, "");
+ macro->replace_node(node, macro->top());
+ }
}
Node* G1BarrierSetC2::step_over_gc_barrier(Node* c) const {
@@ -781,6 +831,135 @@
}
#ifdef ASSERT
+bool G1BarrierSetC2::has_cas_in_use_chain(Node *n) const {
+ Unique_Node_List visited;
+ Node_List worklist;
+ worklist.push(n);
+ while (worklist.size() > 0) {
+ Node* x = worklist.pop();
+ if (visited.member(x)) {
+ continue;
+ } else {
+ visited.push(x);
+ }
+
+ if (x->is_LoadStore()) {
+ int op = x->Opcode();
+ if (op == Op_CompareAndExchangeP || op == Op_CompareAndExchangeN ||
+ op == Op_CompareAndSwapP || op == Op_CompareAndSwapN ||
+ op == Op_WeakCompareAndSwapP || op == Op_WeakCompareAndSwapN) {
+ return true;
+ }
+ }
+ if (!x->is_CFG()) {
+ for (SimpleDUIterator iter(x); iter.has_next(); iter.next()) {
+ Node* use = iter.get();
+ worklist.push(use);
+ }
+ }
+ }
+ return false;
+}
+
+void G1BarrierSetC2::verify_pre_load(Node* marking_if, Unique_Node_List& loads /*output*/) const {
+ assert(loads.size() == 0, "Loads list should be empty");
+ Node* pre_val_if = marking_if->find_out_with(Op_IfTrue)->find_out_with(Op_If);
+ if (pre_val_if != NULL) {
+ Unique_Node_List visited;
+ Node_List worklist;
+ Node* pre_val = pre_val_if->in(1)->in(1)->in(1);
+
+ worklist.push(pre_val);
+ while (worklist.size() > 0) {
+ Node* x = worklist.pop();
+ if (visited.member(x)) {
+ continue;
+ } else {
+ visited.push(x);
+ }
+
+ if (has_cas_in_use_chain(x)) {
+ loads.clear();
+ return;
+ }
+
+ if (x->is_Con()) {
+ continue;
+ }
+ if (x->is_EncodeP() || x->is_DecodeN()) {
+ worklist.push(x->in(1));
+ continue;
+ }
+ if (x->is_Load() || x->is_LoadStore()) {
+ assert(x->in(0) != NULL, "Pre-val load has to have a control");
+ loads.push(x);
+ continue;
+ }
+ if (x->is_Phi()) {
+ for (uint i = 1; i < x->req(); i++) {
+ worklist.push(x->in(i));
+ }
+ continue;
+ }
+ assert(false, "Pre-val anomaly");
+ }
+ }
+}
+
+void G1BarrierSetC2::verify_no_safepoints(Compile* compile, Node* marking_check_if, const Unique_Node_List& loads) const {
+ if (loads.size() == 0) {
+ return;
+ }
+
+ if (loads.size() == 1) { // Handle the typical situation when there is a single pre-value load
+ // that is dominated by the marking_check_if; that's true when the
+ // barrier itself does the pre-val load.
+ Node *pre_val = loads.at(0);
+ if (pre_val->in(0)->in(0) == marking_check_if) { // IfTrue->If
+ return;
+ }
+ }
+
+ // All other cases are when pre-value loads dominate the marking check.
+ Unique_Node_List controls;
+ for (uint i = 0; i < loads.size(); i++) {
+ Node *c = loads.at(i)->in(0);
+ controls.push(c);
+ }
+
+ Unique_Node_List visited;
+ Unique_Node_List safepoints;
+ Node_List worklist;
+ uint found = 0;
+
+ worklist.push(marking_check_if);
+ while (worklist.size() > 0 && found < controls.size()) {
+ Node* x = worklist.pop();
+ if (x == NULL || x == compile->top()) continue;
+ if (visited.member(x)) {
+ continue;
+ } else {
+ visited.push(x);
+ }
+
+ if (controls.member(x)) {
+ found++;
+ }
+ if (x->is_Region()) {
+ for (uint i = 1; i < x->req(); i++) {
+ worklist.push(x->in(i));
+ }
+ } else {
+ if (!x->is_SafePoint()) {
+ worklist.push(x->in(0));
+ } else {
+ safepoints.push(x);
+ }
+ }
+ }
+ assert(found == controls.size(), "Pre-barrier structure anomaly or possible safepoint");
+}
+
void G1BarrierSetC2::verify_gc_barriers(Compile* compile, CompilePhase phase) const {
if (phase != BarrierSetC2::BeforeCodeGen) {
return;
@@ -830,11 +1009,15 @@
if (if_ctrl != load_ctrl) {
// Skip possible CProj->NeverBranch in infinite loops
if ((if_ctrl->is_Proj() && if_ctrl->Opcode() == Op_CProj)
- && (if_ctrl->in(0)->is_MultiBranch() && if_ctrl->in(0)->Opcode() == Op_NeverBranch)) {
+ && if_ctrl->in(0)->is_NeverBranch()) {
if_ctrl = if_ctrl->in(0)->in(0);
}
}
assert(load_ctrl != NULL && if_ctrl == load_ctrl, "controls must match");
+
+ Unique_Node_List loads;
+ verify_pre_load(iff, loads);
+ verify_no_safepoints(compile, iff, loads);
}
}
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -84,7 +84,15 @@
virtual Node* load_at_resolved(C2Access& access, const Type* val_type) const;
- public:
+#ifdef ASSERT
+ bool has_cas_in_use_chain(Node* x) const;
+ void verify_pre_load(Node* marking_check_if, Unique_Node_List& loads /*output*/) const;
+ void verify_no_safepoints(Compile* compile, Node* marking_load, const Unique_Node_List& loads) const;
+#endif
+
+ static bool is_g1_pre_val_load(Node* n);
+public:
+ virtual bool is_gc_pre_barrier_node(Node* node) const;
virtual bool is_gc_barrier_node(Node* node) const;
virtual void eliminate_gc_barrier(PhaseMacroExpand* macro, Node* node) const;
virtual Node* step_over_gc_barrier(Node* c) const;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1CollectedHeap.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1CollectedHeap.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1CollectedHeap.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -3201,6 +3201,31 @@
}
};
+// Special closure for enqueuing discovered fields: during enqueue the card table
+// may not be in shape to properly handle normal barrier calls (e.g. card marks
+// in regions that failed evacuation, scribbling of various values by card table
+// scan code). Additionally the regular barrier enqueues into the "global"
+// DCQS, but during GC we need these to-be-refined entries in the GC local queue
+// so that after clearing the card table, the redirty cards phase will properly
+// mark all dirty cards to be picked up by refinement.
+class G1EnqueueDiscoveredFieldClosure : public EnqueueDiscoveredFieldClosure {
+ G1CollectedHeap* _g1h;
+ G1ParScanThreadState* _pss;
+
+public:
+ G1EnqueueDiscoveredFieldClosure(G1CollectedHeap* g1h, G1ParScanThreadState* pss) : _g1h(g1h), _pss(pss) { }
+
+ virtual void enqueue(HeapWord* discovered_field_addr, oop value) {
+ assert(_g1h->is_in(discovered_field_addr), PTR_FORMAT " is not in heap ", p2i(discovered_field_addr));
+ // Store the value first, whatever it is.
+ RawAccess<>::oop_store(discovered_field_addr, value);
+ if (value == NULL) {
+ return;
+ }
+ _pss->write_ref_field_post(discovered_field_addr, value);
+ }
+};
+
// Serial drain queue closure. Called as the 'complete_gc'
// closure for each discovered list in some of the
// reference processing phases.
@@ -3245,7 +3270,8 @@
G1STWIsAliveClosure is_alive(&_g1h);
G1CopyingKeepAliveClosure keep_alive(&_g1h, _pss.state_for_worker(index));
G1ParEvacuateFollowersClosure complete_gc(&_g1h, _pss.state_for_worker(index), &_task_queues, _tm == RefProcThreadModel::Single ? nullptr : &_terminator, G1GCPhaseTimes::ObjCopy);
- _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &complete_gc);
+ G1EnqueueDiscoveredFieldClosure enqueue(&_g1h, _pss.state_for_worker(index));
+ _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, &complete_gc);
}
void prepare_run_task_hook() override {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1478,8 +1478,9 @@
G1CMIsAliveClosure is_alive(&_g1h);
uint index = (_tm == RefProcThreadModel::Single) ? 0 : worker_id;
G1CMKeepAliveAndDrainClosure keep_alive(&_cm, _cm.task(index), _tm == RefProcThreadModel::Single);
+ BarrierEnqueueDiscoveredFieldClosure enqueue;
G1CMDrainMarkingStackClosure complete_gc(&_cm, _cm.task(index), _tm == RefProcThreadModel::Single);
- _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &complete_gc);
+ _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, &complete_gc);
}
void prepare_run_task_hook() override {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1FullCollector.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1FullCollector.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1FullCollector.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1FullCollector.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -259,8 +259,9 @@
G1IsAliveClosure is_alive(&_collector);
uint index = (_tm == RefProcThreadModel::Single) ? 0 : worker_id;
G1FullKeepAliveClosure keep_alive(_collector.marker(index));
+ BarrierEnqueueDiscoveredFieldClosure enqueue;
G1FollowStackClosure* complete_gc = _collector.marker(index)->stack_closure();
- _rp_task->rp_work(worker_id, &is_alive, &keep_alive, complete_gc);
+ _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, complete_gc);
}
};
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -203,14 +203,7 @@
}
RawAccess<IS_NOT_NULL>::oop_store(p, obj);
- assert(obj != NULL, "Must be");
- if (HeapRegion::is_in_same_region(p, obj)) {
- return;
- }
- HeapRegion* from = _g1h->heap_region_containing(p);
- if (!from->is_young()) {
- enqueue_card_if_tracked(_g1h->region_attr(obj), p, obj);
- }
+ write_ref_field_post(p, obj);
}
MAYBE_INLINE_EVACUATION
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -128,6 +128,12 @@
void push_on_queue(ScannerTask task);
+ // Apply the post barrier to the given reference field. Enqueues the card of p
+ // if the barrier does not filter out the reference for some reason (e.g.
+ // p and q are in the same region, p is in survivor, p is in collection set)
+ // To be called during GC if nothing particular about p and obj are known.
+ template <class T> void write_ref_field_post(T* p, oop obj);
+
template <class T> void enqueue_card_if_tracked(G1HeapRegionAttr region_attr, T* p, oop o) {
assert(!HeapRegion::is_in_same_region(p, o), "Should have filtered out cross-region references already.");
assert(!_g1h->heap_region_containing(p)->is_young(), "Should have filtered out from-young references already.");
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -95,4 +95,16 @@
return &_oops_into_optional_regions[hr->index_in_opt_cset()];
}
+template <class T> void G1ParScanThreadState::write_ref_field_post(T* p, oop obj) {
+ assert(obj != NULL, "Must be");
+ if (HeapRegion::is_in_same_region(p, obj)) {
+ return;
+ }
+ HeapRegion* from = _g1h->heap_region_containing(p);
+ if (!from->is_young()) {
+ enqueue_card_if_tracked(_g1h->region_attr(obj), p, obj);
+ }
+}
+
+
#endif // SHARE_GC_G1_G1PARSCANTHREADSTATE_INLINE_HPP
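Note the filter order in the write_ref_field_post body above: the same-region check is pure address arithmetic, the young-region check costs a region lookup, and only references surviving both reach the card enqueue. A stand-alone sketch of that ordering, assuming fixed power-of-two regions (the constant and helpers below are illustrative, not HotSpot code):

#include <cstdint>
#include <cstdio>

constexpr uintptr_t kRegionBytes = uintptr_t(1) << 20;  // assumed region size

static bool same_region(const void* p, const void* q) {
  return (uintptr_t(p) / kRegionBytes) == (uintptr_t(q) / kRegionBytes);
}

// Stand-in for HeapRegion::is_young() via the region table.
static bool region_is_young(const void*) { return false; }

static int cards_enqueued = 0;

static void write_ref_field_post(void** p, void* obj) {
  if (same_region(p, obj)) return;  // intra-region refs need no remembered-set entry
  if (region_is_young(p)) return;   // young regions are always scanned in full
  cards_enqueued++;                 // stand-in for enqueue_card_if_tracked(...)
}

int main() {
  void* slot = nullptr;
  int obj = 0;
  write_ref_field_post(&slot, &obj);
  std::printf("cards enqueued: %d\n", cards_enqueued);
}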
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/parallel/psParallelCompact.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/parallel/psParallelCompact.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/parallel/psParallelCompact.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/parallel/psParallelCompact.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -2067,8 +2067,9 @@
assert(worker_id < _max_workers, "sanity");
ParCompactionManager* cm = (_tm == RefProcThreadModel::Single) ? ParCompactionManager::get_vmthread_cm() : ParCompactionManager::gc_thread_compaction_manager(worker_id);
PCMarkAndPushClosure keep_alive(cm);
+ BarrierEnqueueDiscoveredFieldClosure enqueue;
ParCompactionManager::FollowStackClosure complete_gc(cm, (_tm == RefProcThreadModel::Single) ? nullptr : &_terminator, worker_id);
- _rp_task->rp_work(worker_id, PSParallelCompact::is_alive_closure(), &keep_alive, &complete_gc);
+ _rp_task->rp_work(worker_id, PSParallelCompact::is_alive_closure(), &keep_alive, &enqueue, &complete_gc);
}
void prepare_run_task_hook() override {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/parallel/psScavenge.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/parallel/psScavenge.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/parallel/psScavenge.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/parallel/psScavenge.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -210,9 +210,10 @@
assert(worker_id < _max_workers, "sanity");
PSPromotionManager* promotion_manager = (_tm == RefProcThreadModel::Single) ? PSPromotionManager::vm_thread_promotion_manager() : PSPromotionManager::gc_thread_promotion_manager(worker_id);
PSIsAliveClosure is_alive;
- PSKeepAliveClosure keep_alive(promotion_manager);;
+ PSKeepAliveClosure keep_alive(promotion_manager);
+ BarrierEnqueueDiscoveredFieldClosure enqueue;
PSEvacuateFollowersClosure complete_gc(promotion_manager, (_marks_oops_alive && _tm == RefProcThreadModel::Multi) ? &_terminator : nullptr, worker_id);;
- _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &complete_gc);
+ _rp_task->rp_work(worker_id, &is_alive, &keep_alive, &enqueue, &complete_gc);
}
void prepare_run_task_hook() override {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/serial/serialGcRefProcProxyTask.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/serial/serialGcRefProcProxyTask.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/serial/serialGcRefProcProxyTask.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/serial/serialGcRefProcProxyTask.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -41,7 +41,8 @@
void work(uint worker_id) override {
assert(worker_id < _max_workers, "sanity");
- _rp_task->rp_work(worker_id, &_is_alive, &_keep_alive, &_complete_gc);
+ BarrierEnqueueDiscoveredFieldClosure enqueue;
+ _rp_task->rp_work(worker_id, &_is_alive, &_keep_alive, &enqueue, &_complete_gc);
}
};
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/c2/barrierSetC2.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/c2/barrierSetC2.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/c2/barrierSetC2.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/c2/barrierSetC2.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -259,6 +259,7 @@
// Support for GC barriers emitted during parsing
virtual bool has_load_barrier_nodes() const { return false; }
+ virtual bool is_gc_pre_barrier_node(Node* node) const { return false; }
virtual bool is_gc_barrier_node(Node* node) const { return false; }
virtual Node* step_over_gc_barrier(Node* c) const { return c; }
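The new is_gc_pre_barrier_node hook follows the usual BarrierSetC2 pattern: the shared base class answers false, and each collector overrides only the queries it actually implements, so shared C2 code can ask without knowing which GC is active. A toy sketch of that shape (stand-in types, not the HotSpot hierarchy):

#include <cstdio>

struct Node {};

struct BarrierSetC2Like {
  virtual bool is_gc_pre_barrier_node(Node*) const { return false; }
  virtual ~BarrierSetC2Like() {}
};

struct PreBarrierGC : BarrierSetC2Like {
  bool is_gc_pre_barrier_node(Node*) const override { return true; }  // toy answer
};

int main() {
  Node n;
  PreBarrierGC gc;
  const BarrierSetC2Like* active = &gc;  // cf. the active barrier set singleton
  std::printf("%d\n", active->is_gc_pre_barrier_node(&n));
}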
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/referenceProcessor.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/referenceProcessor.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/referenceProcessor.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/referenceProcessor.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -245,6 +245,12 @@
return stats;
}
+void BarrierEnqueueDiscoveredFieldClosure::enqueue(HeapWord* discovered_field_addr, oop value) {
+ assert(Universe::heap()->is_in(discovered_field_addr), PTR_FORMAT " not in heap", p2i(discovered_field_addr));
+ HeapAccess<AS_NO_KEEPALIVE>::oop_store(discovered_field_addr,
+ value);
+}
+
void DiscoveredListIterator::load_ptrs(DEBUG_ONLY(bool allow_null_referent)) {
_current_discovered_addr = java_lang_ref_Reference::discovered_addr_raw(_current_discovered);
oop discovered = java_lang_ref_Reference::discovered(_current_discovered);
@@ -304,12 +310,12 @@
}
void DiscoveredListIterator::complete_enqueue() {
- if (_prev_discovered != NULL) {
+ if (_prev_discovered != nullptr) {
// This is the last object.
// Swap refs_list into pending list and set obj's
// discovered to what we read from the pending list.
oop old = Universe::swap_reference_pending_list(_refs_list.head());
- HeapAccess<AS_NO_KEEPALIVE>::oop_store_at(_prev_discovered, java_lang_ref_Reference::discovered_offset(), old);
+ _enqueue->enqueue(java_lang_ref_Reference::discovered_addr_raw(_prev_discovered), old);
}
}
@@ -337,7 +343,7 @@
OopClosure* keep_alive,
VoidClosure* complete_gc) {
assert(policy != NULL, "Must have a non-NULL policy");
- DiscoveredListIterator iter(refs_list, keep_alive, is_alive);
+ DiscoveredListIterator iter(refs_list, keep_alive, is_alive, NULL /* enqueue */);
// Decide which softly reachable refs should be kept alive.
while (iter.has_next()) {
iter.load_ptrs(DEBUG_ONLY(!discovery_is_atomic() /* allow_null_referent */));
@@ -365,8 +371,9 @@
size_t ReferenceProcessor::process_soft_weak_final_refs_work(DiscoveredList& refs_list,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
bool do_enqueue_and_clear) {
- DiscoveredListIterator iter(refs_list, keep_alive, is_alive);
+ DiscoveredListIterator iter(refs_list, keep_alive, is_alive, enqueue);
while (iter.has_next()) {
iter.load_ptrs(DEBUG_ONLY(!discovery_is_atomic() /* allow_null_referent */));
if (iter.referent() == NULL) {
@@ -409,8 +416,9 @@
size_t ReferenceProcessor::process_final_keep_alive_work(DiscoveredList& refs_list,
OopClosure* keep_alive,
- VoidClosure* complete_gc) {
- DiscoveredListIterator iter(refs_list, keep_alive, NULL);
+ VoidClosure* complete_gc,
+ EnqueueDiscoveredFieldClosure* enqueue) {
+ DiscoveredListIterator iter(refs_list, keep_alive, NULL, enqueue);
while (iter.has_next()) {
iter.load_ptrs(DEBUG_ONLY(false /* allow_null_referent */));
// keep the referent and followers around
@@ -436,8 +444,9 @@
size_t ReferenceProcessor::process_phantom_refs_work(DiscoveredList& refs_list,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
- VoidClosure* complete_gc) {
- DiscoveredListIterator iter(refs_list, keep_alive, is_alive);
+ VoidClosure* complete_gc,
+ EnqueueDiscoveredFieldClosure* enqueue) {
+ DiscoveredListIterator iter(refs_list, keep_alive, is_alive, enqueue);
while (iter.has_next()) {
iter.load_ptrs(DEBUG_ONLY(!discovery_is_atomic() /* allow_null_referent */));
@@ -509,8 +518,6 @@
return total_count(list);
}
-
-
class RefProcPhase1Task : public RefProcTask {
public:
RefProcPhase1Task(ReferenceProcessor& ref_processor,
@@ -523,6 +530,7 @@
void rp_work(uint worker_id,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
VoidClosure* complete_gc) override {
ResourceMark rm;
RefProcSubPhasesWorkerTimeTracker tt(ReferenceProcessor::SoftRefSubPhase1, _phase_times, worker_id);
@@ -543,11 +551,13 @@
DiscoveredList list[],
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
bool do_enqueue_and_clear,
ReferenceType ref_type) {
size_t const removed = _ref_processor.process_soft_weak_final_refs_work(list[worker_id],
is_alive,
keep_alive,
+ enqueue,
do_enqueue_and_clear);
_phase_times->add_ref_cleared(ref_type, removed);
}
@@ -561,20 +571,21 @@
void rp_work(uint worker_id,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
VoidClosure* complete_gc) override {
ResourceMark rm;
RefProcWorkerTimeTracker t(_phase_times->phase2_worker_time_sec(), worker_id);
{
RefProcSubPhasesWorkerTimeTracker tt(ReferenceProcessor::SoftRefSubPhase2, _phase_times, worker_id);
- run_phase2(worker_id, _ref_processor._discoveredSoftRefs, is_alive, keep_alive, true /* do_enqueue_and_clear */, REF_SOFT);
+ run_phase2(worker_id, _ref_processor._discoveredSoftRefs, is_alive, keep_alive, enqueue, true /* do_enqueue_and_clear */, REF_SOFT);
}
{
RefProcSubPhasesWorkerTimeTracker tt(ReferenceProcessor::WeakRefSubPhase2, _phase_times, worker_id);
- run_phase2(worker_id, _ref_processor._discoveredWeakRefs, is_alive, keep_alive, true /* do_enqueue_and_clear */, REF_WEAK);
+ run_phase2(worker_id, _ref_processor._discoveredWeakRefs, is_alive, keep_alive, enqueue, true /* do_enqueue_and_clear */, REF_WEAK);
}
{
RefProcSubPhasesWorkerTimeTracker tt(ReferenceProcessor::FinalRefSubPhase2, _phase_times, worker_id);
- run_phase2(worker_id, _ref_processor._discoveredFinalRefs, is_alive, keep_alive, false /* do_enqueue_and_clear */, REF_FINAL);
+ run_phase2(worker_id, _ref_processor._discoveredFinalRefs, is_alive, keep_alive, enqueue, false /* do_enqueue_and_clear */, REF_FINAL);
}
// Close the reachable set; needed for collectors which keep_alive_closure do
// not immediately complete their work.
@@ -592,10 +603,11 @@
void rp_work(uint worker_id,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
VoidClosure* complete_gc) override {
ResourceMark rm;
RefProcSubPhasesWorkerTimeTracker tt(ReferenceProcessor::FinalRefSubPhase3, _phase_times, worker_id);
- _ref_processor.process_final_keep_alive_work(_ref_processor._discoveredFinalRefs[worker_id], keep_alive, complete_gc);
+ _ref_processor.process_final_keep_alive_work(_ref_processor._discoveredFinalRefs[worker_id], keep_alive, complete_gc, enqueue);
}
};
@@ -609,13 +621,15 @@
void rp_work(uint worker_id,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
VoidClosure* complete_gc) override {
ResourceMark rm;
RefProcSubPhasesWorkerTimeTracker tt(ReferenceProcessor::PhantomRefSubPhase4, _phase_times, worker_id);
size_t const removed = _ref_processor.process_phantom_refs_work(_ref_processor._discoveredPhantomRefs[worker_id],
is_alive,
keep_alive,
- complete_gc);
+ complete_gc,
+ enqueue);
_phase_times->add_ref_cleared(REF_PHANTOM, removed);
}
};
@@ -965,34 +979,61 @@
return list;
}
-inline void
-ReferenceProcessor::add_to_discovered_list_mt(DiscoveredList& refs_list,
- oop obj,
- HeapWord* discovered_addr) {
- assert(_discovery_is_mt, "!_discovery_is_mt should have been handled by caller");
- // First we must make sure this object is only enqueued once. CAS in a non null
- // discovered_addr.
+inline bool ReferenceProcessor::set_discovered_link(HeapWord* discovered_addr, oop next_discovered) {
+ return discovery_is_mt() ? set_discovered_link_mt(discovered_addr, next_discovered)
+ : set_discovered_link_st(discovered_addr, next_discovered);
+}
+
+inline void ReferenceProcessor::add_to_discovered_list(DiscoveredList& refs_list,
+ oop obj,
+ HeapWord* discovered_addr) {
oop current_head = refs_list.head();
- // The last ref must have its discovered field pointing to itself.
+ // Prepare value to put into the discovered field. The last ref must have its
+ // discovered field pointing to itself.
oop next_discovered = (current_head != NULL) ? current_head : obj;
- oop retest = HeapAccess<AS_NO_KEEPALIVE>::oop_atomic_cmpxchg(discovered_addr, oop(NULL), next_discovered);
+ bool added = set_discovered_link(discovered_addr, next_discovered);
+ if (added) {
+ // We can always add the object without synchronization: every thread has its
+ // own list head.
+ refs_list.add_as_head(obj);
+ log_develop_trace(gc, ref)("Discovered reference (%s) (" INTPTR_FORMAT ": %s)",
+ discovery_is_mt() ? "mt" : "st", p2i(obj), obj->klass()->internal_name());
+ } else {
+ log_develop_trace(gc, ref)("Already discovered reference (mt) (" INTPTR_FORMAT ": %s)",
+ p2i(obj), obj->klass()->internal_name());
+ }
+}
- if (retest == NULL) {
- // This thread just won the right to enqueue the object.
- // We have separate lists for enqueueing, so no synchronization
- // is necessary.
- refs_list.set_head(obj);
- refs_list.inc_length(1);
+inline bool ReferenceProcessor::set_discovered_link_st(HeapWord* discovered_addr,
+ oop next_discovered) {
+ assert(!discovery_is_mt(), "must be");
- log_develop_trace(gc, ref)("Discovered reference (mt) (" INTPTR_FORMAT ": %s)",
- p2i(obj), obj->klass()->internal_name());
+ if (discovery_is_atomic()) {
+ // Do a raw store here: the field will be visited later when processing
+ // the discovered references.
+ RawAccess<>::oop_store(discovered_addr, next_discovered);
} else {
- // If retest was non NULL, another thread beat us to it:
- // The reference has already been discovered...
- log_develop_trace(gc, ref)("Already discovered reference (" INTPTR_FORMAT ": %s)",
- p2i(obj), obj->klass()->internal_name());
+ HeapAccess<AS_NO_KEEPALIVE>::oop_store(discovered_addr, next_discovered);
+ }
+ // Always successful.
+ return true;
+}
+
+inline bool ReferenceProcessor::set_discovered_link_mt(HeapWord* discovered_addr,
+ oop next_discovered) {
+ assert(discovery_is_mt(), "must be");
+
+ // We must make sure this object is only enqueued once. Try to CAS into the discovered_addr.
+ oop retest;
+ if (discovery_is_atomic()) {
+ // Try a raw store here, still making sure that we enqueue only once: the field
+ // will be visited later when processing the discovered references.
+ retest = RawAccess<>::oop_atomic_cmpxchg(discovered_addr, oop(NULL), next_discovered);
+ } else {
+ retest = HeapAccess<AS_NO_KEEPALIVE>::oop_atomic_cmpxchg(discovered_addr, oop(NULL), next_discovered);
}
+ return retest == NULL;
}
#ifndef PRODUCT
@@ -1127,22 +1168,8 @@
return false; // nothing special needs to be done
}
- if (_discovery_is_mt) {
- add_to_discovered_list_mt(*list, obj, discovered_addr);
- } else {
- // We do a raw store here: the field will be visited later when processing
- // the discovered references.
- oop current_head = list->head();
- // The last ref must have its discovered field pointing to itself.
- oop next_discovered = (current_head != NULL) ? current_head : obj;
+ add_to_discovered_list(*list, obj, discovered_addr);
- assert(discovered == NULL, "control point invariant");
- RawAccess<>::oop_store(discovered_addr, next_discovered);
- list->set_head(obj);
- list->inc_length(1);
-
- log_develop_trace(gc, ref)("Discovered reference (" INTPTR_FORMAT ": %s)", p2i(obj), obj->klass()->internal_name());
- }
assert(oopDesc::is_oop(obj), "Discovered a bad reference");
verify_referent(obj);
return true;
@@ -1246,7 +1273,7 @@
OopClosure* keep_alive,
VoidClosure* complete_gc,
YieldClosure* yield) {
- DiscoveredListIterator iter(refs_list, keep_alive, is_alive);
+ DiscoveredListIterator iter(refs_list, keep_alive, is_alive, NULL /* enqueue */);
while (iter.has_next()) {
if (yield->should_return_fine_grain()) {
return true;
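The st/mt split above rests on one invariant: under multi-threaded discovery a Reference must be linked into exactly one discovered list, which the CAS from NULL in set_discovered_link_mt guarantees. A self-contained sketch of that discover-once idea, with std::atomic standing in for the HotSpot Access API:

#include <atomic>
#include <cstdio>
#include <thread>

static std::atomic<void*> discovered_field{nullptr};

static bool try_discover(void* next_discovered) {
  void* expected = nullptr;
  // Succeeds for exactly one caller; losers observe the winner's value.
  return discovered_field.compare_exchange_strong(expected, next_discovered);
}

int main() {
  static int a = 0, b = 0;
  std::thread t1([] { std::printf("t1 discovered: %d\n", try_discover(&a)); });
  std::thread t2([] { std::printf("t2 discovered: %d\n", try_discover(&b)); });
  t1.join();
  t2.join();  // exactly one of the two prints 1
}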
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/referenceProcessor.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/referenceProcessor.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/referenceProcessor.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/referenceProcessor.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -38,6 +38,28 @@
class RefProcTask;
class RefProcProxyTask;
+// Provides a callback to the garbage collector to set the given value to the
+// discovered field of the j.l.ref.Reference instance. This is called during STW
+// reference processing when iterating over the discovered lists for all
+// discovered references.
+// Typically garbage collectors may just call the barrier, but for some garbage
+// collectors the barrier environment (e.g. card table) may not be set up correctly
+// at the point of invocation.
+class EnqueueDiscoveredFieldClosure {
+public:
+ // For the given j.l.ref.Reference discovered field address, set the discovered
+ // field to value and apply any barriers to it.
+ virtual void enqueue(HeapWord* discovered_field_addr, oop value) = 0;
+
+};
+
+// EnqueueDiscoveredFieldClosure that executes the default barrier on the discovered
+// field of the j.l.ref.Reference with the given value.
+class BarrierEnqueueDiscoveredFieldClosure : public EnqueueDiscoveredFieldClosure {
+public:
+ void enqueue(HeapWord* discovered_field_addr, oop value) override;
+};
+
// List of discovered references.
class DiscoveredList {
public:
@@ -47,6 +69,7 @@
return UseCompressedOops ? (HeapWord*)&_compressed_head :
(HeapWord*)&_oop_head;
}
+ inline void add_as_head(oop o);
inline void set_head(oop o);
inline bool is_empty() const;
size_t length() { return _len; }
@@ -65,7 +88,6 @@
// Iterator for the list of discovered references.
class DiscoveredListIterator {
-private:
DiscoveredList& _refs_list;
HeapWord* _prev_discovered_addr;
oop _prev_discovered;
@@ -77,6 +99,7 @@
OopClosure* _keep_alive;
BoolObjectClosure* _is_alive;
+ EnqueueDiscoveredFieldClosure* _enqueue;
DEBUG_ONLY(
oop _first_seen; // cyclic linked list check
@@ -88,7 +111,8 @@
public:
inline DiscoveredListIterator(DiscoveredList& refs_list,
OopClosure* keep_alive,
- BoolObjectClosure* is_alive);
+ BoolObjectClosure* is_alive,
+ EnqueueDiscoveredFieldClosure* enqueue);
// End Of List.
inline bool has_next() const { return _current_discovered != NULL; }
@@ -273,18 +297,21 @@
size_t process_soft_weak_final_refs_work(DiscoveredList& refs_list,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
bool do_enqueue_and_clear);
// Keep alive followers of referents for FinalReferences. Must only be called for
// those.
size_t process_final_keep_alive_work(DiscoveredList& refs_list,
OopClosure* keep_alive,
- VoidClosure* complete_gc);
+ VoidClosure* complete_gc,
+ EnqueueDiscoveredFieldClosure* enqueue);
size_t process_phantom_refs_work(DiscoveredList& refs_list,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
- VoidClosure* complete_gc);
+ VoidClosure* complete_gc,
+ EnqueueDiscoveredFieldClosure* enqueue);
public:
static int number_of_subclasses_of_ref() { return (REF_PHANTOM - REF_OTHER); }
@@ -341,8 +368,13 @@
return id;
}
DiscoveredList* get_discovered_list(ReferenceType rt);
- inline void add_to_discovered_list_mt(DiscoveredList& refs_list, oop obj,
- HeapWord* discovered_addr);
+ inline bool set_discovered_link(HeapWord* discovered_addr, oop next_discovered);
+ inline void add_to_discovered_list(DiscoveredList& refs_list, oop obj,
+ HeapWord* discovered_addr);
+ inline bool set_discovered_link_st(HeapWord* discovered_addr,
+ oop next_discovered);
+ inline bool set_discovered_link_mt(HeapWord* discovered_addr,
+ oop next_discovered);
void clear_discovered_references(DiscoveredList& refs_list);
@@ -604,6 +636,7 @@
virtual void rp_work(uint worker_id,
BoolObjectClosure* is_alive,
OopClosure* keep_alive,
+ EnqueueDiscoveredFieldClosure* enqueue,
VoidClosure* complete_gc) = 0;
};
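Taken together, these header changes turn the final pending-list store into an injected policy: DiscoveredListIterator now carries an EnqueueDiscoveredFieldClosure and calls it where it previously hard-coded a HeapAccess store. A minimal sketch of that injection, with hypothetical stand-in types:

#include <cstdio>

struct Enqueue {
  virtual void enqueue(void** field, void* value) = 0;
  virtual ~Enqueue() {}
};

struct PrintingEnqueue : Enqueue {
  void enqueue(void** field, void* value) override {
    *field = value;
    std::printf("enqueue applied\n");
  }
};

struct ListIteratorLike {
  void** _prev_field;
  Enqueue* _enqueue;  // may be null for phases that never enqueue

  ListIteratorLike(void** prev, Enqueue* e) : _prev_field(prev), _enqueue(e) {}

  void complete_enqueue(void* pending_head) {
    if (_prev_field != nullptr) {
      _enqueue->enqueue(_prev_field, pending_head);  // was a hard-coded store
    }
  }
};

int main() {
  void* slot = nullptr;
  int head = 0;
  PrintingEnqueue e;
  ListIteratorLike it(&slot, &e);
  it.complete_enqueue(&head);
}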
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/referenceProcessor.inline.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/referenceProcessor.inline.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shared/referenceProcessor.inline.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shared/referenceProcessor.inline.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -35,6 +35,11 @@
_oop_head;
}
+void DiscoveredList::add_as_head(oop o) {
+ set_head(o);
+ inc_length(1);
+}
+
void DiscoveredList::set_head(oop o) {
if (UseCompressedOops) {
// Must compress the head ptr.
@@ -55,7 +60,8 @@
DiscoveredListIterator::DiscoveredListIterator(DiscoveredList& refs_list,
OopClosure* keep_alive,
- BoolObjectClosure* is_alive):
+ BoolObjectClosure* is_alive,
+ EnqueueDiscoveredFieldClosure* enqueue):
_refs_list(refs_list),
_prev_discovered_addr(refs_list.adr_head()),
_prev_discovered(NULL),
@@ -65,6 +71,7 @@
_referent(NULL),
_keep_alive(keep_alive),
_is_alive(is_alive),
+ _enqueue(enqueue),
#ifdef ASSERT
_first_seen(refs_list.head()),
#endif
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -716,6 +716,11 @@
return result;
}
+
+bool ShenandoahBarrierSetC2::is_gc_pre_barrier_node(Node* node) const {
+ return is_shenandoah_wb_pre_call(node);
+}
+
// Support for GC barriers emitted during parsing
bool ShenandoahBarrierSetC2::is_gc_barrier_node(Node* node) const {
if (node->Opcode() == Op_ShenandoahLoadReferenceBarrier) return true;
@@ -1023,7 +1028,7 @@
if (if_ctrl != load_ctrl) {
// Skip possible CProj->NeverBranch in infinite loops
if ((if_ctrl->is_Proj() && if_ctrl->Opcode() == Op_CProj)
- && (if_ctrl->in(0)->is_MultiBranch() && if_ctrl->in(0)->Opcode() == Op_NeverBranch)) {
+ && if_ctrl->in(0)->is_NeverBranch()) {
if_ctrl = if_ctrl->in(0)->in(0);
}
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.hpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shenandoah/c2/shenandoahBarrierSetC2.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -112,6 +112,7 @@
virtual bool array_copy_requires_gc_barriers(bool tightly_coupled_alloc, BasicType type, bool is_clone, bool is_clone_instance, ArrayCopyPhase phase) const;
// Support for GC barriers emitted during parsing
+ virtual bool is_gc_pre_barrier_node(Node* node) const;
virtual bool is_gc_barrier_node(Node* node) const;
virtual Node* step_over_gc_barrier(Node* c) const;
virtual bool expand_barriers(Compile* C, PhaseIterGVN& igvn) const;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -757,7 +757,7 @@
return NodeSentinel; // unsupported
} else if (c->Opcode() == Op_CatchProj) {
return NodeSentinel; // unsupported
- } else if (c->Opcode() == Op_CProj && next->Opcode() == Op_NeverBranch) {
+ } else if (c->Opcode() == Op_CProj && next->is_NeverBranch()) {
return NodeSentinel; // unsupported
} else {
assert(next->unique_ctrl_out() == c, "unsupported branch pattern");
@@ -2125,7 +2125,7 @@
static bool has_never_branch(Node* root) {
for (uint i = 1; i < root->req(); i++) {
Node* in = root->in(i);
- if (in != NULL && in->Opcode() == Op_Halt && in->in(0)->is_Proj() && in->in(0)->in(0)->Opcode() == Op_NeverBranch) {
+ if (in != NULL && in->Opcode() == Op_Halt && in->in(0)->is_Proj() && in->in(0)->in(0)->is_NeverBranch()) {
return true;
}
}
@@ -2156,20 +2156,20 @@
if (in->in(0)->is_Region()) {
Node* r = in->in(0);
for (uint j = 1; j < r->req(); j++) {
- assert(r->in(j)->Opcode() != Op_NeverBranch, "");
+ assert(!r->in(j)->is_NeverBranch(), "");
}
} else {
Node* proj = in->in(0);
assert(proj->is_Proj(), "");
Node* in = proj->in(0);
- assert(in->is_CallStaticJava() || in->Opcode() == Op_NeverBranch || in->Opcode() == Op_Catch || proj->is_IfProj(), "");
+ assert(in->is_CallStaticJava() || in->is_NeverBranch() || in->Opcode() == Op_Catch || proj->is_IfProj(), "");
if (in->is_CallStaticJava()) {
mem = in->in(TypeFunc::Memory);
} else if (in->Opcode() == Op_Catch) {
Node* call = in->in(0)->in(0);
assert(call->is_Call(), "");
mem = call->in(TypeFunc::Memory);
- } else if (in->Opcode() == Op_NeverBranch) {
+ } else if (in->is_NeverBranch()) {
mem = collect_memory_for_infinite_loop(in);
}
}
@@ -2641,7 +2641,7 @@
}
}
} else if (!mem_is_valid(m, u) &&
- !(u->Opcode() == Op_CProj && u->in(0)->Opcode() == Op_NeverBranch && u->as_Proj()->_con == 1)) {
+ !(u->Opcode() == Op_CProj && u->in(0)->is_NeverBranch() && u->as_Proj()->_con == 1)) {
uses.push(u);
}
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/gc/z/zPhysicalMemory.cpp openjdk-17-17.0.7+7/src/hotspot/share/gc/z/zPhysicalMemory.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/gc/z/zPhysicalMemory.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/gc/z/zPhysicalMemory.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -282,7 +282,7 @@
}
void ZPhysicalMemoryManager::nmt_uncommit(uintptr_t offset, size_t size) const {
- if (MemTracker::tracking_level() > NMT_minimal) {
+ if (MemTracker::enabled()) {
const uintptr_t addr = ZAddress::marked0(offset);
Tracker tracker(Tracker::uncommit);
tracker.record((address)addr, size);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp openjdk-17-17.0.7+7/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -293,6 +293,8 @@
istate->set_bcp(pc+opsize); \
return;
+#define REWRITE_AT_PC(val) \
+ *pc = val;
#define METHOD istate->method()
#define GET_METHOD_COUNTERS(res)
@@ -389,6 +391,81 @@
if (THREAD->has_pending_exception()) goto label; \
}
+#define MAYBE_POST_FIELD_ACCESS(obj) { \
+ if (JVMTI_ENABLED) { \
+ int* count_addr; \
+ /* Check to see if a field access watch has been set */ \
+ /* before we take the time to call into the VM. */ \
+ count_addr = (int*)JvmtiExport::get_field_access_count_addr(); \
+ if (*count_addr > 0) { \
+ oop target; \
+ if ((Bytecodes::Code)opcode == Bytecodes::_getstatic) { \
+ target = NULL; \
+ } else { \
+ target = obj; \
+ } \
+ CALL_VM(InterpreterRuntime::post_field_access(THREAD, \
+ target, cache), \
+ handle_exception); \
+ } \
+ } \
+}
+
+#define MAYBE_POST_FIELD_MODIFICATION(obj) { \
+ if (JVMTI_ENABLED) { \
+ int* count_addr; \
+ /* Check to see if a field modification watch has been set */ \
+ /* before we take the time to call into the VM. */ \
+ count_addr = (int*)JvmtiExport::get_field_modification_count_addr(); \
+ if (*count_addr > 0) { \
+ oop target; \
+ if ((Bytecodes::Code)opcode == Bytecodes::_putstatic) { \
+ target = NULL; \
+ } else { \
+ target = obj; \
+ } \
+ CALL_VM(InterpreterRuntime::post_field_modification(THREAD, \
+ target, cache, \
+ (jvalue*)STACK_SLOT(-1)), \
+ handle_exception); \
+ } \
+ } \
+}
+
+static inline int fast_get_type(TosState tos) {
+ switch (tos) {
+ case ztos:
+ case btos: return Bytecodes::_fast_bgetfield;
+ case ctos: return Bytecodes::_fast_cgetfield;
+ case stos: return Bytecodes::_fast_sgetfield;
+ case itos: return Bytecodes::_fast_igetfield;
+ case ltos: return Bytecodes::_fast_lgetfield;
+ case ftos: return Bytecodes::_fast_fgetfield;
+ case dtos: return Bytecodes::_fast_dgetfield;
+ case atos: return Bytecodes::_fast_agetfield;
+ default:
+ ShouldNotReachHere();
+ return -1;
+ }
+}
+
+static inline int fast_put_type(TosState tos) {
+ switch (tos) {
+ case ztos: return Bytecodes::_fast_zputfield;
+ case btos: return Bytecodes::_fast_bputfield;
+ case ctos: return Bytecodes::_fast_cputfield;
+ case stos: return Bytecodes::_fast_sputfield;
+ case itos: return Bytecodes::_fast_iputfield;
+ case ltos: return Bytecodes::_fast_lputfield;
+ case ftos: return Bytecodes::_fast_fputfield;
+ case dtos: return Bytecodes::_fast_dputfield;
+ case atos: return Bytecodes::_fast_aputfield;
+ default:
+ ShouldNotReachHere();
+ return -1;
+ }
+}
+
/*
* BytecodeInterpreter::run(interpreterState istate)
*
@@ -397,11 +474,13 @@
* the method passed in.
*/
-// Instantiate two variants of the method for future linking.
-template void BytecodeInterpreter::run<false>(interpreterState istate);
-template void BytecodeInterpreter::run<true>(interpreterState istate);
+// Instantiate variants of the method for future linking.
+template void BytecodeInterpreter::run<false, false>(interpreterState istate);
+template void BytecodeInterpreter::run<false, true>(interpreterState istate);
+template void BytecodeInterpreter::run< true, false>(interpreterState istate);
+template void BytecodeInterpreter::run< true, true>(interpreterState istate);
-template<bool JVMTI_ENABLED>
+template<bool JVMTI_ENABLED, bool REWRITE_BYTECODES>
void BytecodeInterpreter::run(interpreterState istate) {
intptr_t* topOfStack = (intptr_t *)istate->stack(); /* access with STACK macros */
address pc = istate->bcp();
@@ -497,15 +576,15 @@
/* 0xC0 */ &&opc_checkcast, &&opc_instanceof, &&opc_monitorenter, &&opc_monitorexit,
/* 0xC4 */ &&opc_wide, &&opc_multianewarray, &&opc_ifnull, &&opc_ifnonnull,
-/* 0xC8 */ &&opc_goto_w, &&opc_jsr_w, &&opc_breakpoint, &&opc_default,
-/* 0xCC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xC8 */ &&opc_goto_w, &&opc_jsr_w, &&opc_breakpoint, &&opc_fast_agetfield,
+/* 0xCC */ &&opc_fast_bgetfield,&&opc_fast_cgetfield, &&opc_fast_dgetfield, &&opc_fast_fgetfield,
-/* 0xD0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xD4 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xD8 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xDC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xD0 */ &&opc_fast_igetfield,&&opc_fast_lgetfield, &&opc_fast_sgetfield, &&opc_fast_aputfield,
+/* 0xD4 */ &&opc_fast_bputfield,&&opc_fast_zputfield, &&opc_fast_cputfield, &&opc_fast_dputfield,
+/* 0xD8 */ &&opc_fast_fputfield,&&opc_fast_iputfield, &&opc_fast_lputfield, &&opc_fast_sputfield,
+/* 0xDC */ &&opc_fast_aload_0, &&opc_fast_iaccess_0, &&opc_fast_aaccess_0, &&opc_fast_faccess_0,
-/* 0xE0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xE0 */ &&opc_fast_iload, &&opc_fast_iload2, &&opc_fast_icaload, &&opc_fast_invokevfinal,
/* 0xE4 */ &&opc_default, &&opc_default, &&opc_fast_aldc, &&opc_fast_aldc_w,
/* 0xE8 */ &&opc_return_register_finalizer,
&&opc_invokehandle, &&opc_default, &&opc_default,
@@ -752,10 +831,41 @@
UPDATE_PC_AND_TOS_AND_CONTINUE(2, 1);
CASE(_iload):
+ {
+ if (REWRITE_BYTECODES) {
+ // Attempt to rewrite iload, iload -> fast_iload2
+ // iload, caload -> fast_icaload
+ // Normal iloads will be rewritten to fast_iload to avoid checking again.
+ switch (*(pc + 2)) {
+ case Bytecodes::_fast_iload:
+ REWRITE_AT_PC(Bytecodes::_fast_iload2);
+ break;
+ case Bytecodes::_caload:
+ REWRITE_AT_PC(Bytecodes::_fast_icaload);
+ break;
+ case Bytecodes::_iload:
+ // Wait until rewritten to _fast_iload.
+ break;
+ default:
+ // Last iload in a (potential) series, don't check again.
+ REWRITE_AT_PC(Bytecodes::_fast_iload);
+ }
+ }
+ // Normal iload handling.
+ SET_STACK_SLOT(LOCALS_SLOT(pc[1]), 0);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(2, 1);
+ }
+
+ CASE(_fast_iload):
CASE(_fload):
SET_STACK_SLOT(LOCALS_SLOT(pc[1]), 0);
UPDATE_PC_AND_TOS_AND_CONTINUE(2, 1);
+ CASE(_fast_iload2):
+ SET_STACK_SLOT(LOCALS_SLOT(pc[1]), 0);
+ SET_STACK_SLOT(LOCALS_SLOT(pc[3]), 1);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(4, 2);
+
CASE(_lload):
SET_STACK_LONG_FROM_ADDR(LOCALS_LONG_AT(pc[1]), 1);
UPDATE_PC_AND_TOS_AND_CONTINUE(2, 2);
@@ -766,11 +876,6 @@
#undef OPC_LOAD_n
#define OPC_LOAD_n(num) \
- CASE(_aload_##num): \
- VERIFY_OOP(LOCALS_OBJECT(num)); \
- SET_STACK_OBJECT(LOCALS_OBJECT(num), 0); \
- UPDATE_PC_AND_TOS_AND_CONTINUE(1, 1); \
- \
CASE(_iload_##num): \
CASE(_fload_##num): \
SET_STACK_SLOT(LOCALS_SLOT(num), 0); \
@@ -783,10 +888,53 @@
SET_STACK_DOUBLE_FROM_ADDR(LOCALS_DOUBLE_AT(num), 1); \
UPDATE_PC_AND_TOS_AND_CONTINUE(1, 2);
- OPC_LOAD_n(0);
- OPC_LOAD_n(1);
- OPC_LOAD_n(2);
- OPC_LOAD_n(3);
+ OPC_LOAD_n(0);
+ OPC_LOAD_n(1);
+ OPC_LOAD_n(2);
+ OPC_LOAD_n(3);
+
+#undef OPC_ALOAD_n
+#define OPC_ALOAD_n(num) \
+ CASE(_aload_##num): { \
+ oop obj = LOCALS_OBJECT(num); \
+ VERIFY_OOP(obj); \
+ SET_STACK_OBJECT(obj, 0); \
+ UPDATE_PC_AND_TOS_AND_CONTINUE(1, 1); \
+ }
+
+ CASE(_aload_0):
+ {
+ /* Maybe rewrite if following bytecode is one of the supported _fast_Xgetfield bytecodes. */
+ if (REWRITE_BYTECODES) {
+ switch (*(pc + 1)) {
+ case Bytecodes::_fast_agetfield:
+ REWRITE_AT_PC(Bytecodes::_fast_aaccess_0);
+ break;
+ case Bytecodes::_fast_fgetfield:
+ REWRITE_AT_PC(Bytecodes::_fast_faccess_0);
+ break;
+ case Bytecodes::_fast_igetfield:
+ REWRITE_AT_PC(Bytecodes::_fast_iaccess_0);
+ break;
+ case Bytecodes::_getfield: {
+ /* Otherwise, do nothing here, wait until it gets rewritten to _fast_Xgetfield.
+ * Unfortunately, this punishes volatile field access, because it never gets
+ * rewritten. */
+ break;
+ }
+ default:
+ REWRITE_AT_PC(Bytecodes::_fast_aload_0);
+ break;
+ }
+ }
+ VERIFY_OOP(LOCALS_OBJECT(0));
+ SET_STACK_OBJECT(LOCALS_OBJECT(0), 0);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(1, 1);
+ }
+
+ OPC_ALOAD_n(1);
+ OPC_ALOAD_n(2);
+ OPC_ALOAD_n(3);
/* store to a local variable */
@@ -1318,11 +1466,7 @@
/* Array access byte-codes */
- /* Every array access byte-code starts out like this */
-// arrayOopDesc* arrObj = (arrayOopDesc*)STACK_OBJECT(arrayOff);
-#define ARRAY_INTRO(arrayOff) \
- arrayOop arrObj = (arrayOop)STACK_OBJECT(arrayOff); \
- jint index = STACK_INT(arrayOff + 1); \
+#define ARRAY_INDEX_CHECK(arrObj, index) \
/* Two integers, the additional message, and the null-terminator */ \
char message[2 * jintAsStringSize + 33]; \
CHECK_NULL(arrObj); \
@@ -1334,6 +1478,13 @@
message); \
}
+ /* Every array access byte-code starts out like this */
+// arrayOopDesc* arrObj = (arrayOopDesc*)STACK_OBJECT(arrayOff);
+#define ARRAY_INTRO(arrayOff) \
+ arrayOop arrObj = (arrayOop)STACK_OBJECT(arrayOff); \
+ jint index = STACK_INT(arrayOff + 1); \
+ ARRAY_INDEX_CHECK(arrObj, index)
+
/* 32-bit loads. These handle conversion from < 32-bit types */
#define ARRAY_LOADTO32(T, T2, format, stackRes, extra) \
{ \
@@ -1373,6 +1524,15 @@
CASE(_daload):
ARRAY_LOADTO64(T_DOUBLE, jdouble, STACK_DOUBLE, 0);
+ CASE(_fast_icaload): {
+ // Custom fast access for iload,caload pair.
+ arrayOop arrObj = (arrayOop) STACK_OBJECT(-1);
+ jint index = LOCALS_INT(pc[1]);
+ ARRAY_INDEX_CHECK(arrObj, index);
+ SET_STACK_INT(*(jchar *)(((address) arrObj->base(T_CHAR)) + index * sizeof(jchar)), -1);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, 0);
+ }
+
/* 32-bit stores. These handle conversion to < 32-bit types */
#define ARRAY_STOREFROM32(T, T2, format, stackSrc, extra) \
{ \
@@ -1546,26 +1706,6 @@
cache = cp->entry_at(index);
}
- if (JVMTI_ENABLED) {
- int *count_addr;
- oop obj;
- // Check to see if a field modification watch has been set
- // before we take the time to call into the VM.
- count_addr = (int *)JvmtiExport::get_field_access_count_addr();
- if ( *count_addr > 0 ) {
- if ((Bytecodes::Code)opcode == Bytecodes::_getstatic) {
- obj = NULL;
- } else {
- obj = STACK_OBJECT(-1);
- VERIFY_OOP(obj);
- }
- CALL_VM(InterpreterRuntime::post_field_access(THREAD,
- obj,
- cache),
- handle_exception);
- }
- }
-
oop obj;
if ((Bytecodes::Code)opcode == Bytecodes::_getstatic) {
Klass* k = cache->f1_as_klass();
@@ -1574,8 +1714,15 @@
} else {
obj = STACK_OBJECT(-1);
CHECK_NULL(obj);
+ // Check if we can rewrite non-volatile _getfield to one of the _fast_Xgetfield.
+ if (REWRITE_BYTECODES && !cache->is_volatile()) {
+ // Rewrite current BC to _fast_Xgetfield.
+ REWRITE_AT_PC(fast_get_type(cache->flag_state()));
+ }
}
+ MAYBE_POST_FIELD_ACCESS(obj);
+
//
// Now store the result on the stack
//
@@ -1670,33 +1817,6 @@
cache = cp->entry_at(index);
}
- if (JVMTI_ENABLED) {
- int *count_addr;
- oop obj;
- // Check to see if a field modification watch has been set
- // before we take the time to call into the VM.
- count_addr = (int *)JvmtiExport::get_field_modification_count_addr();
- if ( *count_addr > 0 ) {
- if ((Bytecodes::Code)opcode == Bytecodes::_putstatic) {
- obj = NULL;
- }
- else {
- if (cache->is_long() || cache->is_double()) {
- obj = STACK_OBJECT(-3);
- } else {
- obj = STACK_OBJECT(-2);
- }
- VERIFY_OOP(obj);
- }
-
- CALL_VM(InterpreterRuntime::post_field_modification(THREAD,
- obj,
- cache,
- (jvalue *)STACK_SLOT(-1)),
- handle_exception);
- }
- }
-
// QQQ Need to make this as inlined as possible. Probably need to split all the bytecode cases
// out so c++ compiler has a chance for constant prop to fold everything possible away.
@@ -1715,8 +1835,16 @@
--count;
obj = STACK_OBJECT(count);
CHECK_NULL(obj);
+
+ // Check if we can rewrite non-volatile _putfield to one of the _fast_Xputfield.
+ if (REWRITE_BYTECODES && !cache->is_volatile()) {
+ // Rewrite current BC to _fast_Xputfield.
+ REWRITE_AT_PC(fast_put_type(cache->flag_state()));
+ }
}
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
//
// Now store the result
//
@@ -2276,6 +2404,10 @@
CHECK_NULL(STACK_OBJECT(-(cache->parameter_size())));
if (cache->is_vfinal()) {
callee = cache->f2_as_vfinal_method();
+ if (REWRITE_BYTECODES) {
+ // Rewrite to _fast_invokevfinal.
+ REWRITE_AT_PC(Bytecodes::_fast_invokevfinal);
+ }
} else {
// get receiver
int parms = cache->parameter_size();
@@ -2410,6 +2542,329 @@
goto opcode_switch;
}
+ CASE(_fast_agetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ VERIFY_OOP(obj->obj_field(field_offset));
+ SET_STACK_OBJECT(obj->obj_field(field_offset), -1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_bgetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_INT(obj->byte_field(field_offset), -1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_cgetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_INT(obj->char_field(field_offset), -1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_dgetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_DOUBLE(obj->double_field(field_offset), 0);
+ MORE_STACK(1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_fgetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_FLOAT(obj->float_field(field_offset), -1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_igetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_INT(obj->int_field(field_offset), -1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_lgetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_LONG(obj->long_field(field_offset), 0);
+ MORE_STACK(1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_sgetfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = STACK_OBJECT(-1);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_INT(obj->short_field(field_offset), -1);
+ UPDATE_PC_AND_CONTINUE(3);
+ }
+
+ CASE(_fast_aputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->obj_field_put(field_offset, STACK_OBJECT(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_bputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->byte_field_put(field_offset, STACK_INT(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_zputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->byte_field_put(field_offset, (STACK_INT(-1) & 1)); // only store LSB
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_cputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->char_field_put(field_offset, STACK_INT(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_dputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-3);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->double_field_put(field_offset, STACK_DOUBLE(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -3);
+ }
+
+ CASE(_fast_fputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->float_field_put(field_offset, STACK_FLOAT(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_iputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->int_field_put(field_offset, STACK_INT(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_lputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-3);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->long_field_put(field_offset, STACK_LONG(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -3);
+ }
+
+ CASE(_fast_sputfield): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ oop obj = STACK_OBJECT(-2);
+ CHECK_NULL(obj);
+
+ MAYBE_POST_FIELD_MODIFICATION(obj);
+
+ int field_offset = cache->f2_as_index();
+ obj->short_field_put(field_offset, STACK_INT(-1));
+
+ UPDATE_PC_AND_TOS_AND_CONTINUE(3, -2);
+ }
+
+ CASE(_fast_aload_0): {
+ oop obj = LOCALS_OBJECT(0);
+ VERIFY_OOP(obj);
+ SET_STACK_OBJECT(obj, 0);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(1, 1);
+ }
+
+ CASE(_fast_aaccess_0): {
+ u2 index = Bytes::get_native_u2(pc+2);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = LOCALS_OBJECT(0);
+ CHECK_NULL(obj);
+ VERIFY_OOP(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ VERIFY_OOP(obj->obj_field(field_offset));
+ SET_STACK_OBJECT(obj->obj_field(field_offset), 0);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(4, 1);
+ }
+
+ CASE(_fast_iaccess_0): {
+ u2 index = Bytes::get_native_u2(pc+2);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = LOCALS_OBJECT(0);
+ CHECK_NULL(obj);
+ VERIFY_OOP(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_INT(obj->int_field(field_offset), 0);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(4, 1);
+ }
+
+ CASE(_fast_faccess_0): {
+ u2 index = Bytes::get_native_u2(pc+2);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+ int field_offset = cache->f2_as_index();
+
+ oop obj = LOCALS_OBJECT(0);
+ CHECK_NULL(obj);
+ VERIFY_OOP(obj);
+
+ MAYBE_POST_FIELD_ACCESS(obj);
+
+ SET_STACK_FLOAT(obj->float_field(field_offset), 0);
+ UPDATE_PC_AND_TOS_AND_CONTINUE(4, 1);
+ }
+
+ CASE(_fast_invokevfinal): {
+ u2 index = Bytes::get_native_u2(pc+1);
+ ConstantPoolCacheEntry* cache = cp->entry_at(index);
+
+ assert(cache->is_resolved(Bytecodes::_invokevirtual), "Should be resolved before rewriting");
+
+ istate->set_msg(call_method);
+
+ CHECK_NULL(STACK_OBJECT(-(cache->parameter_size())));
+ Method* callee = cache->f2_as_vfinal_method();
+ istate->set_callee(callee);
+ if (JVMTI_ENABLED && THREAD->is_interp_only_mode()) {
+ istate->set_callee_entry_point(callee->interpreter_entry());
+ } else {
+ istate->set_callee_entry_point(callee->from_interpreted_entry());
+ }
+ istate->set_bcp_advance(3);
+ UPDATE_PC_AND_RETURN(0);
+ }
+
DEFAULT:
fatal("Unimplemented opcode %d = %s", opcode,
Bytecodes::name((Bytecodes::Code)opcode));
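All of the _fast_* handlers added above serve bytecode quickening: the first execution of a slow, unresolved bytecode rewrites itself in place (REWRITE_AT_PC) so later executions dispatch straight to a resolved fast variant. A toy model of the rewrite-then-redispatch loop, not HotSpot code:

#include <cstdint>
#include <cstdio>
#include <vector>

enum Bc : uint8_t { GETFIELD, FAST_IGETFIELD, RETURN };

static int interpret(std::vector<uint8_t>& code, int field_value) {
  int result = 0;
  for (size_t pc = 0; pc < code.size(); ) {
    switch (code[pc]) {
      case GETFIELD:
        // Slow path: "resolve" the field, then quicken the bytecode in place.
        code[pc] = FAST_IGETFIELD;  // analogous to REWRITE_AT_PC(...)
        break;                      // re-dispatch at the same pc
      case FAST_IGETFIELD:
        result = field_value;       // resolution already done, just load
        pc += 1;
        break;
      case RETURN:
        return result;
    }
  }
  return result;
}

int main() {
  std::vector<uint8_t> code = {GETFIELD, RETURN};
  std::printf("%d\n", interpret(code, 42));  // quickens, then runs the fast path
  std::printf("%d\n", interpret(code, 43));  // fast path from the first dispatch
}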
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/interpreter/zero/bytecodeInterpreter.hpp openjdk-17-17.0.7+7/src/hotspot/share/interpreter/zero/bytecodeInterpreter.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/interpreter/zero/bytecodeInterpreter.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/interpreter/zero/bytecodeInterpreter.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -503,7 +503,7 @@
static void dup2_x2(intptr_t *tos); /* insert top 2 slots four down */
static void swap(intptr_t *tos); /* swap top two elements */
-template<bool JVMTI_ENABLED>
+template<bool JVMTI_ENABLED, bool REWRITE_BYTECODES>
static void run(interpreterState istate);
static void astore(intptr_t* topOfStack, int stack_offset,
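Making REWRITE_BYTECODES a template parameter rather than a runtime flag means each <JVMTI_ENABLED, REWRITE_BYTECODES> combination compiles to its own specialized interpreter, with untaken branches eliminated as dead code. A minimal illustration of the same technique:

#include <cstdio>

template <bool JVMTI_ENABLED, bool REWRITE_BYTECODES>
void run_step(unsigned char& bc) {
  if (JVMTI_ENABLED) {          // dead code in the <false, ...> instantiations
    std::printf("would post a JVMTI event\n");
  }
  if (REWRITE_BYTECODES) {      // dead code in the <..., false> instantiations
    bc = 0xD0;                  // pretend-quicken to a fast variant
  }
}

// Mirrors the four explicit instantiations added in bytecodeInterpreter.cpp.
template void run_step<false, false>(unsigned char&);
template void run_step<false, true>(unsigned char&);
template void run_step<true, false>(unsigned char&);
template void run_step<true, true>(unsigned char&);

int main() {
  unsigned char bc = 0xB4;  // getfield
  run_step<false, true>(bc);
  std::printf("bc = 0x%X\n", bc);
}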
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/metadata/metadata.xml openjdk-17-17.0.7+7/src/hotspot/share/jfr/metadata/metadata.xml
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/metadata/metadata.xml 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/metadata/metadata.xml 2023-04-12 20:11:58.000000000 +0000
@@ -494,12 +494,16 @@
-
+
-
+
@@ -510,13 +514,17 @@
-
+
-
+
@@ -527,7 +535,9 @@
-
+
@@ -543,7 +553,9 @@
-
+
@@ -556,7 +568,9 @@
-
+
@@ -666,11 +680,15 @@
-
+
-
+
@@ -679,7 +697,9 @@
-
+
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -230,64 +230,72 @@
}
// offsets into the JfrCheckpointEntry
-static const juint starttime_offset = sizeof(jlong);
-static const juint duration_offset = starttime_offset + sizeof(jlong);
-static const juint checkpoint_type_offset = duration_offset + sizeof(jlong);
-static const juint types_offset = checkpoint_type_offset + sizeof(juint);
-static const juint payload_offset = types_offset + sizeof(juint);
+static const size_t starttime_offset = sizeof(int64_t);
+static const size_t duration_offset = starttime_offset + sizeof(int64_t);
+static const size_t checkpoint_type_offset = duration_offset + sizeof(int64_t);
+static const size_t types_offset = checkpoint_type_offset + sizeof(uint32_t);
+static const size_t payload_offset = types_offset + sizeof(uint32_t);
template <typename Return>
static Return read_data(const u1* data) {
return JfrBigEndian::read<Return>(data);
}
-static jlong total_size(const u1* data) {
- return read_data<jlong>(data);
+static size_t total_size(const u1* data) {
+ const int64_t size = read_data<int64_t>(data);
+ assert(size > 0, "invariant");
+ return static_cast<size_t>(size);
}
-static jlong starttime(const u1* data) {
- return read_data<jlong>(data + starttime_offset);
+static int64_t starttime(const u1* data) {
+ return read_data<int64_t>(data + starttime_offset);
}
-static jlong duration(const u1* data) {
- return read_data<jlong>(data + duration_offset);
+static int64_t duration(const u1* data) {
+ return read_data<int64_t>(data + duration_offset);
}
-static u1 checkpoint_type(const u1* data) {
- return read_data<u1>(data + checkpoint_type_offset);
+static uint8_t checkpoint_type(const u1* data) {
+ return read_data<uint8_t>(data + checkpoint_type_offset);
}
-static juint number_of_types(const u1* data) {
- return read_data<juint>(data + types_offset);
+static uint32_t number_of_types(const u1* data) {
+ return read_data<uint32_t>(data + types_offset);
}
-static void write_checkpoint_header(JfrChunkWriter& cw, int64_t delta_to_last_checkpoint, const u1* data) {
- cw.reserve(sizeof(u4));
- cw.write(EVENT_CHECKPOINT);
- cw.write(starttime(data));
- cw.write(duration(data));
- cw.write(delta_to_last_checkpoint);
- cw.write(checkpoint_type(data));
- cw.write(number_of_types(data));
+static size_t payload_size(const u1* data) {
+ return total_size(data) - sizeof(JfrCheckpointEntry);
}
-static void write_checkpoint_content(JfrChunkWriter& cw, const u1* data, size_t size) {
- assert(data != NULL, "invariant");
- cw.write_unbuffered(data + payload_offset, size - sizeof(JfrCheckpointEntry));
+static uint64_t calculate_event_size_bytes(JfrChunkWriter& cw, const u1* data, int64_t event_begin, int64_t delta_to_last_checkpoint) {
+ assert(data != nullptr, "invariant");
+ size_t bytes = cw.size_in_bytes(EVENT_CHECKPOINT);
+ bytes += cw.size_in_bytes(starttime(data));
+ bytes += cw.size_in_bytes(duration(data));
+ bytes += cw.size_in_bytes(delta_to_last_checkpoint);
+ bytes += cw.size_in_bytes(checkpoint_type(data));
+ bytes += cw.size_in_bytes(number_of_types(data));
+ bytes += payload_size(data); // in bytes already.
+ return bytes + cw.size_in_bytes(bytes + cw.size_in_bytes(bytes));
}
static size_t write_checkpoint_event(JfrChunkWriter& cw, const u1* data) {
assert(data != NULL, "invariant");
const int64_t event_begin = cw.current_offset();
const int64_t last_checkpoint_event = cw.last_checkpoint_offset();
- const int64_t delta_to_last_checkpoint = last_checkpoint_event == 0 ? 0 : last_checkpoint_event - event_begin;
- const int64_t checkpoint_size = total_size(data);
- write_checkpoint_header(cw, delta_to_last_checkpoint, data);
- write_checkpoint_content(cw, data, checkpoint_size);
- const int64_t event_size = cw.current_offset() - event_begin;
- cw.write_padded_at_offset(event_size, event_begin);
cw.set_last_checkpoint_offset(event_begin);
- return (size_t)checkpoint_size;
+ const int64_t delta_to_last_checkpoint = last_checkpoint_event == 0 ? 0 : last_checkpoint_event - event_begin;
+ const uint64_t event_size = calculate_event_size_bytes(cw, data, event_begin, delta_to_last_checkpoint);
+ cw.write(event_size);
+ cw.write(EVENT_CHECKPOINT);
+ cw.write(starttime(data));
+ cw.write(duration(data));
+ cw.write(delta_to_last_checkpoint);
+ cw.write(checkpoint_type(data));
+ cw.write(number_of_types(data));
+ cw.write_unbuffered(data + payload_offset, payload_size(data));
+ assert(static_cast<uint64_t>(cw.current_offset() - event_begin) == event_size, "invariant");
+ return total_size(data);
}
static size_t write_checkpoints(JfrChunkWriter& cw, const u1* data, size_t size) {
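The least obvious line in the new code is 'bytes + cw.size_in_bytes(bytes + cw.size_in_bytes(bytes))': the event size is written as a variable-length integer, and the size field is itself part of the size being encoded, so its own encoded length feeds back into the number. A sketch of that feedback, assuming a plain LEB128-style encoder (the real writer uses JFR's compressed-integer encoding and asserts afterwards that the bytes actually written match):

#include <cstdint>
#include <cstdio>

static uint32_t varint_len(uint64_t v) {  // bytes needed to encode v
  uint32_t n = 1;
  while (v >>= 7) {
    n++;
  }
  return n;
}

static uint64_t event_size(uint64_t body_bytes) {
  // Estimate the prefix from body plus a first-order guess of the prefix;
  // the assert in write_checkpoint_event checks the result is exact.
  return body_bytes + varint_len(body_bytes + varint_len(body_bytes));
}

int main() {
  for (uint64_t body : {10u, 127u, 128u, 1u << 20}) {
    std::printf("body %8llu -> event %8llu bytes\n",
                (unsigned long long)body, (unsigned long long)event_size(body));
  }
}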
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdBits.inline.hpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdBits.inline.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdBits.inline.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdBits.inline.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -132,7 +132,15 @@
template <typename T>
inline void JfrTraceIdBits::store(jbyte bits, const T* ptr) {
assert(ptr != NULL, "invariant");
+ // gcc12 warns "writing 1 byte into a region of size 0" when T == Klass.
+ // The warning seems to be a false positive. And there is no warning for
+ // other types that use the same mechanisms. The warning also sometimes
+ // goes away with minor code perturbations, such as replacing function calls
+ // with equivalent code directly inlined.
+ PRAGMA_DIAG_PUSH
+ PRAGMA_DISABLE_GCC_WARNING("-Wstringop-overflow")
set(bits, traceid_tag_byte(ptr));
+ PRAGMA_DIAG_POP
}
template <typename T>
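
Aside: PRAGMA_DIAG_PUSH, PRAGMA_DISABLE_GCC_WARNING and PRAGMA_DIAG_POP are HotSpot wrappers over the compiler's diagnostic pragmas. Outside HotSpot, the same narrowly scoped suppression looks like the following sketch (illustrative function, assuming GCC):

#include <cstring>

void tag_byte(char* dst, const char* src) {
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstringop-overflow"
  // Suppression applies only until the pop; the warning is judged spurious here.
  memcpy(dst, src, 1);
#pragma GCC diagnostic pop
}
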
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/repository/jfrChunkRotation.cpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/repository/jfrChunkRotation.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/repository/jfrChunkRotation.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/repository/jfrChunkRotation.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -38,7 +38,7 @@
// read static field
HandleMark hm(thread);
static const char klass[] = "jdk/jfr/internal/JVM";
- static const char field[] = "FILE_DELTA_CHANGE";
+ static const char field[] = "CHUNK_ROTATION_MONITOR";
static const char signature[] = "Ljava/lang/Object;";
JavaValue result(T_OBJECT);
JfrJavaArguments field_args(&result, klass, field, signature, thread);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/service/jfrRecorderService.cpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/service/jfrRecorderService.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/recorder/service/jfrRecorderService.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/recorder/service/jfrRecorderService.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -185,8 +185,8 @@
return (u4) _content.elements();
}
- u4 size() const {
- return (u4)(end_offset() - start_offset());
+ u8 size() const {
+ return (u8)(end_offset() - start_offset());
}
void write_elements(int64_t offset) {
@@ -194,7 +194,7 @@
}
void write_size() {
- _cw.write_padded_at_offset<u4>(size(), start_offset());
+ _cw.write_padded_at_offset<u8>(size(), start_offset());
}
void set_last_checkpoint() {
@@ -209,7 +209,7 @@
static int64_t write_checkpoint_event_prologue(JfrChunkWriter& cw, u8 type_id) {
const int64_t last_cp_offset = cw.last_checkpoint_offset();
const int64_t delta_to_last_checkpoint = 0 == last_cp_offset ? 0 : last_cp_offset - cw.current_offset();
- cw.reserve(sizeof(u4));
+ cw.reserve(sizeof(u8));
cw.write(EVENT_CHECKPOINT);
cw.write(JfrTicks::now());
cw.write(0); // duration
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/support/jfrIntrinsics.hpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/support/jfrIntrinsics.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/support/jfrIntrinsics.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/support/jfrIntrinsics.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -37,6 +37,7 @@
template(jdk_jfr_internal_JVM, "jdk/jfr/internal/JVM") \
template(jdk_jfr_internal_handlers_EventHandler_signature, "Ljdk/jfr/internal/handlers/EventHandler;") \
template(eventHandler_name, "eventHandler") \
+ template(jfr_chunk_rotation_monitor, "jdk/jfr/internal/JVM$ChunkRotationMonitor") \
#define JFR_INTRINSICS(do_intrinsic, do_class, do_name, do_signature, do_alias) \
do_intrinsic(_counterTime, jdk_jfr_internal_JVM, counterTime_name, void_long_signature, F_SN) \
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrEncoders.hpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrEncoders.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrEncoders.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrEncoders.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -63,6 +63,9 @@
template <typename T>
static size_t encode_padded(const T* src, size_t len, u1* dest);
+ template <typename T>
+ static size_t size_in_bytes(T value);
+
};
template <typename T>
@@ -129,6 +132,17 @@
return size;
}
+template <typename T>
+inline size_t BigEndianEncoderImpl::size_in_bytes(T value) {
+ switch (sizeof(T)) {
+ case 1: return 1;
+ case 2: return 2;
+ case 4: return 4;
+ case 8: return 8;
+ }
+ ShouldNotReachHere();
+ return 0;
+}
// The Varint128 encoder implements encoding according to
// msb(it) 128bit encoding (1 encode bit | 7 value bits),
@@ -160,6 +174,9 @@
template <typename T>
static size_t encode_padded(const T* src, size_t len, u1* dest);
+ template <typename T>
+ static size_t size_in_bytes(T value);
+
};
template <typename T>
@@ -295,4 +312,34 @@
return size;
}
+template <typename T>
+inline size_t Varint128EncoderImpl::size_in_bytes(T value) {
+ const u8 v = to_u8(value);
+ if (LESS_THAN_128(v)) {
+ return 1;
+ }
+ if (LESS_THAN_128(v >> 7)) {
+ return 2;
+ }
+ if (LESS_THAN_128(v >> 14)) {
+ return 3;
+ }
+ if (LESS_THAN_128(v >> 21)) {
+ return 4;
+ }
+ if (LESS_THAN_128(v >> 28)) {
+ return 5;
+ }
+ if (LESS_THAN_128(v >> 35)) {
+ return 6;
+ }
+ if (LESS_THAN_128(v >> 42)) {
+ return 7;
+ }
+ if (LESS_THAN_128(v >> 49)) {
+ return 8;
+ }
+ return 9;
+}
+
#endif // SHARE_JFR_WRITERS_JFRENCODERS_HPP
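
Aside: the shift-by-7 chain above stops at 9 because in this varint variant the ninth byte carries all remaining bits with no continuation flag. A standalone sketch of one encoder/size pair that stays consistent under that cap (illustrative, not the JFR implementation itself):

#include <cassert>
#include <cstddef>
#include <cstdint>

// Encode v with 7 value bits per byte, msb as continuation flag; a ninth
// byte, when needed, holds the remaining top bits verbatim.
static size_t varint_encode(uint64_t v, unsigned char* out) {
  size_t n = 0;
  while (v >= 0x80 && n < 8) {
    out[n++] = (unsigned char)((v & 0x7f) | 0x80);
    v >>= 7;
  }
  out[n++] = (unsigned char) v;
  return n;
}

static size_t varint_size(uint64_t v) {
  size_t n = 1;
  while (v >= 0x80 && n < 9) {
    v >>= 7;
    n++;
  }
  return n;
}

int main() {
  unsigned char buf[9];
  for (uint64_t v : {0ull, 127ull, 128ull, 16383ull, 16384ull, ~0ull}) {
    assert(varint_encode(v, buf) == varint_size(v));
  }
  return 0;
}
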
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrEncoding.hpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrEncoding.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrEncoding.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrEncoding.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -70,6 +70,11 @@
}
template <typename T>
+ static size_t size_in_bytes(T value) {
+ return IntegerEncoder::size_in_bytes(value);
+ }
+
+ template <typename T>
static u1* write(T value, u1* pos) {
return write(&value, 1, pos);
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrWriterHost.hpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrWriterHost.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrWriterHost.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrWriterHost.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -97,6 +97,8 @@
template <typename T>
void write_be_at_offset(T value, int64_t offset);
int64_t reserve(size_t size);
+ template <typename T>
+ size_t size_in_bytes(T value);
};
#endif // SHARE_JFR_WRITERS_JFRWRITERHOST_HPP
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jfr/writers/jfrWriterHost.inline.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -360,4 +360,10 @@
}
}
+template <typename BE, typename IE, typename WriterPolicyImpl>
+template <typename T>
+inline size_t WriterHost<BE, IE, WriterPolicyImpl>::size_in_bytes(T value) {
+ return IE::size_in_bytes(value);
+}
+
#endif // SHARE_JFR_WRITERS_JFRWRITERHOST_INLINE_HPP
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/jvmci/jvmciEnv.cpp openjdk-17-17.0.7+7/src/hotspot/share/jvmci/jvmciEnv.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/jvmci/jvmciEnv.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/jvmci/jvmciEnv.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -303,12 +303,28 @@
int buffer_size = 2048;
while (true) {
ResourceMark rm;
- jlong buffer = (jlong) NEW_RESOURCE_ARRAY_IN_THREAD(THREAD, jbyte, buffer_size);
- int res = encode(THREAD, runtimeKlass, buffer, buffer_size);
- if ((_from_env != nullptr && _from_env->has_pending_exception()) || HAS_PENDING_EXCEPTION) {
- JVMCIRuntime::fatal_exception(_from_env, "HotSpotJVMCIRuntime.encodeThrowable should not throw an exception");
+ jlong buffer = (jlong) NEW_RESOURCE_ARRAY_IN_THREAD_RETURN_NULL(THREAD, jbyte, buffer_size);
+ if (buffer == 0L) {
+ decode(THREAD, runtimeKlass, 0L);
+ return;
}
- if (res < 0) {
+ int res = encode(THREAD, runtimeKlass, buffer, buffer_size);
+ if (_from_env != nullptr && !_from_env->is_hotspot() && _from_env->has_pending_exception()) {
+ // Cannot get name of exception thrown by `encode` as that involves
+ // calling into libjvmci which in turn can raise another exception.
+ _from_env->clear_pending_exception();
+ decode(THREAD, runtimeKlass, -2L);
+ return;
+ } else if (HAS_PENDING_EXCEPTION) {
+ Symbol *ex_name = PENDING_EXCEPTION->klass()->name();
+ CLEAR_PENDING_EXCEPTION;
+ if (ex_name == vmSymbols::java_lang_OutOfMemoryError()) {
+ decode(THREAD, runtimeKlass, -1L);
+ } else {
+ decode(THREAD, runtimeKlass, -2L);
+ }
+ return;
+ } else if (res < 0) {
int required_buffer_size = -res;
if (required_buffer_size > buffer_size) {
buffer_size = required_buffer_size;
@@ -316,7 +332,7 @@
} else {
decode(THREAD, runtimeKlass, buffer);
if (!_to_env->has_pending_exception()) {
- JVMCIRuntime::fatal_exception(_to_env, "HotSpotJVMCIRuntime.decodeAndThrowThrowable should throw an exception");
+ _to_env->throw_InternalError("HotSpotJVMCIRuntime.decodeAndThrowThrowable should have thrown an exception");
}
return;
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/arena.cpp openjdk-17-17.0.7+7/src/hotspot/share/memory/arena.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/arena.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/arena.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -419,10 +419,10 @@
// Determine if pointer belongs to this Arena or not.
bool Arena::contains( const void *ptr ) const {
+ if (_chunk == NULL) return false;
#ifdef ASSERT
if (UseMallocOnly) {
// really slow, but not easy to make fast
- if (_chunk == NULL) return false;
char** bottom = (char**)_chunk->bottom();
for (char** p = (char**)_hwm - 1; p >= bottom; p--) {
if (*p == ptr) return true;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/chunkManager.cpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/chunkManager.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/chunkManager.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/chunkManager.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -316,24 +316,11 @@
const size_t reserved_before = _vslist->reserved_words();
const size_t committed_before = _vslist->committed_words();
- int num_nodes_purged = 0;
- // We purge to return unused memory to the Operating System. We do this in
- // two independent steps.
-
- // 1) We purge the virtual space list: any memory mappings which are
- // completely deserted can be potentially unmapped. We iterate over the list
- // of mappings (VirtualSpaceList::purge) and delete every node whose memory
- // only contains free chunks. Deleting that node includes unmapping its memory,
- // so all chunk vanish automatically.
- // Of course we need to remove the chunk headers of those vanished chunks from
- // the ChunkManager freelist.
- num_nodes_purged = _vslist->purge(&_chunks);
- InternalStats::inc_num_purges();
-
- // 2) Since (1) is rather ineffective - it is rare that a whole node only contains
- // free chunks - we now iterate over all remaining free chunks and
- // and uncommit those which can be uncommitted (>= commit granule size).
+ // We return unused memory to the Operating System: we iterate over all
+ // free chunks and uncommit the backing memory of those large enough to
+ // contain one or multiple commit granules (chunks larger than a granule
+ // always cover a whole number of granules and start at a granule boundary).
if (Settings::uncommit_free_chunks()) {
const chunklevel_t max_level =
chunklevel::level_fitting_word_size(Settings::commit_granule_words());
@@ -365,7 +352,6 @@
ls.print("committed: ");
print_word_size_delta(&ls, committed_before, committed_after);
ls.cr();
- ls.print_cr("full nodes purged: %d", num_nodes_purged);
}
}
DEBUG_ONLY(_vslist->verify_locked());
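
Aside: the shortened comment relies on the buddy layout of chunks: sizes are powers of two and every chunk starts at an address aligned to its own size, so any free chunk at least one commit granule large covers a whole number of granules. A toy check of that alignment argument (the granule size here is illustrative, not the VM default):

#include <cassert>
#include <cstddef>

int main() {
  const size_t granule = 64 * 1024;        // illustrative commit granule
  const size_t root = 16 * 1024 * 1024;    // root chunk size from the level table
  for (size_t chunk = 1024; chunk <= root; chunk *= 2) {
    if (chunk < granule) continue;         // too small to be uncommitted directly
    assert(chunk % granule == 0);
    // Chunks start at multiples of their own size inside a root chunk,
    // hence on granule boundaries.
    for (size_t base = 0; base < root; base += chunk) {
      assert(base % granule == 0);
    }
  }
  return 0;
}
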
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/chunklevel.hpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/chunklevel.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/chunklevel.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/chunklevel.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -46,19 +46,13 @@
// From there on it goes:
//
// size level
-// 4MB 0
-// 2MB 1
-// 1MB 2
-// 512K 3
-// 256K 4
-// 128K 5
-// 64K 6
-// 32K 7
-// 16K 8
-// 8K 9
-// 4K 10
-// 2K 11
-// 1K 12
+// 16MB 0
+// 8MB 1
+// 4MB 2
+// ...
+// 4K 12
+// 2K 13
+// 1K 14
// Metachunk level (must be signed)
typedef signed char chunklevel_t;
@@ -67,8 +61,8 @@
namespace chunklevel {
-static const size_t MAX_CHUNK_BYTE_SIZE = 4 * M;
-static const int NUM_CHUNK_LEVELS = 13;
+static const size_t MAX_CHUNK_BYTE_SIZE = 16 * M;
+static const int NUM_CHUNK_LEVELS = 15;
static const size_t MIN_CHUNK_BYTE_SIZE = (MAX_CHUNK_BYTE_SIZE >> ((size_t)NUM_CHUNK_LEVELS - 1));
static const size_t MIN_CHUNK_WORD_SIZE = MIN_CHUNK_BYTE_SIZE / sizeof(MetaWord);
@@ -101,22 +95,24 @@
chunklevel_t level_fitting_word_size(size_t word_size);
// Shorthands to refer to exact sizes
-static const chunklevel_t CHUNK_LEVEL_4M = ROOT_CHUNK_LEVEL;
-static const chunklevel_t CHUNK_LEVEL_2M = (ROOT_CHUNK_LEVEL + 1);
-static const chunklevel_t CHUNK_LEVEL_1M = (ROOT_CHUNK_LEVEL + 2);
-static const chunklevel_t CHUNK_LEVEL_512K = (ROOT_CHUNK_LEVEL + 3);
-static const chunklevel_t CHUNK_LEVEL_256K = (ROOT_CHUNK_LEVEL + 4);
-static const chunklevel_t CHUNK_LEVEL_128K = (ROOT_CHUNK_LEVEL + 5);
-static const chunklevel_t CHUNK_LEVEL_64K = (ROOT_CHUNK_LEVEL + 6);
-static const chunklevel_t CHUNK_LEVEL_32K = (ROOT_CHUNK_LEVEL + 7);
-static const chunklevel_t CHUNK_LEVEL_16K = (ROOT_CHUNK_LEVEL + 8);
-static const chunklevel_t CHUNK_LEVEL_8K = (ROOT_CHUNK_LEVEL + 9);
-static const chunklevel_t CHUNK_LEVEL_4K = (ROOT_CHUNK_LEVEL + 10);
-static const chunklevel_t CHUNK_LEVEL_2K = (ROOT_CHUNK_LEVEL + 11);
-static const chunklevel_t CHUNK_LEVEL_1K = (ROOT_CHUNK_LEVEL + 12);
+static const chunklevel_t CHUNK_LEVEL_16M = ROOT_CHUNK_LEVEL;
+static const chunklevel_t CHUNK_LEVEL_8M = (ROOT_CHUNK_LEVEL + 1);
+static const chunklevel_t CHUNK_LEVEL_4M = (ROOT_CHUNK_LEVEL + 2);
+static const chunklevel_t CHUNK_LEVEL_2M = (ROOT_CHUNK_LEVEL + 3);
+static const chunklevel_t CHUNK_LEVEL_1M = (ROOT_CHUNK_LEVEL + 4);
+static const chunklevel_t CHUNK_LEVEL_512K = (ROOT_CHUNK_LEVEL + 5);
+static const chunklevel_t CHUNK_LEVEL_256K = (ROOT_CHUNK_LEVEL + 6);
+static const chunklevel_t CHUNK_LEVEL_128K = (ROOT_CHUNK_LEVEL + 7);
+static const chunklevel_t CHUNK_LEVEL_64K = (ROOT_CHUNK_LEVEL + 8);
+static const chunklevel_t CHUNK_LEVEL_32K = (ROOT_CHUNK_LEVEL + 9);
+static const chunklevel_t CHUNK_LEVEL_16K = (ROOT_CHUNK_LEVEL + 10);
+static const chunklevel_t CHUNK_LEVEL_8K = (ROOT_CHUNK_LEVEL + 11);
+static const chunklevel_t CHUNK_LEVEL_4K = (ROOT_CHUNK_LEVEL + 12);
+static const chunklevel_t CHUNK_LEVEL_2K = (ROOT_CHUNK_LEVEL + 13);
+static const chunklevel_t CHUNK_LEVEL_1K = (ROOT_CHUNK_LEVEL + 14);
STATIC_ASSERT(CHUNK_LEVEL_1K == HIGHEST_CHUNK_LEVEL);
-STATIC_ASSERT(CHUNK_LEVEL_4M == LOWEST_CHUNK_LEVEL);
+STATIC_ASSERT(CHUNK_LEVEL_16M == LOWEST_CHUNK_LEVEL);
STATIC_ASSERT(ROOT_CHUNK_LEVEL == LOWEST_CHUNK_LEVEL);
/////////////////////////////////////////////////////////
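
Aside: with the larger root chunk the size/level relation is unchanged, size(level) = MAX_CHUNK_BYTE_SIZE >> level, now running from 16M at level 0 down to 1K at level 14. A small standalone sketch of the mapping (HotSpot's own conversion lives in chunklevel.cpp/hpp):

#include <cassert>
#include <cstddef>

static int level_for_size(size_t bytes) {
  const size_t max = 16 * 1024 * 1024;  // MAX_CHUNK_BYTE_SIZE after this change
  int level = 0;
  for (size_t s = max; s > bytes; s >>= 1) {
    level++;
  }
  return level;
}

int main() {
  assert(level_for_size(16 * 1024 * 1024) == 0);   // CHUNK_LEVEL_16M
  assert(level_for_size(4 * 1024 * 1024) == 2);    // CHUNK_LEVEL_4M
  assert(level_for_size(1024) == 14);              // CHUNK_LEVEL_1K
  return 0;
}
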
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/internalStats.hpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/internalStats.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/internalStats.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/internalStats.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -92,9 +92,6 @@
/* Number of chunk in place enlargements */ \
x(num_chunks_enlarged) \
\
- /* Number of times we did a purge */ \
- x(num_purges) \
- \
/* Number of times we read inconsistent stats. */ \
x(num_inconsistent_stats) \
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/metaspaceSettings.hpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/metaspaceSettings.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/metaspaceSettings.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/metaspaceSettings.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -42,13 +42,12 @@
// The default size of a VirtualSpaceNode, unless created with an explicitly specified size.
// Must be a multiple of the root chunk size.
- // Increasing this value decreases the number of mappings used for metadata,
- // at the cost of increased virtual size used for Metaspace (or, at least,
- // coarser growth steps). Matters mostly for 32bit platforms due to limited
- // address space.
- // The default of two root chunks has been chosen on a whim but seems to work out okay
- // (coming to a mapping size of 8m per node).
- static const size_t _virtual_space_node_default_word_size = chunklevel::MAX_CHUNK_WORD_SIZE * 2;
+ // This value only affects the process virtual size, and there only the granularity with which it
+ // increases. Matters mostly for 32bit platforms due to limited address space.
+ // Note that this only affects the non-class metaspace. Class space ignores this size (it is one
+ // single large mapping).
+ static const size_t _virtual_space_node_default_word_size =
+ chunklevel::MAX_CHUNK_WORD_SIZE * NOT_LP64(1) LP64_ONLY(4); // 16MB (32-bit) / 64MB (64-bit)
// Alignment of the base address of a virtual space node
static const size_t _virtual_space_node_reserve_alignment_words = chunklevel::MAX_CHUNK_WORD_SIZE;
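
Aside: NOT_LP64(code) expands to its argument only on 32-bit builds and LP64_ONLY(code) only on 64-bit builds, so the new default selects one root chunk (16MB) per node on 32-bit and four (64MB) on 64-bit. A condensed stand-in for the macro pair:

#include <cstdio>

#ifdef _LP64
#define LP64_ONLY(code) code
#define NOT_LP64(code)
#else
#define LP64_ONLY(code)
#define NOT_LP64(code) code
#endif

int main() {
  const int root_chunks_per_node = NOT_LP64(1) LP64_ONLY(4);
  printf("%d root chunks per node\n", root_chunks_per_node);  // 4 on a 64-bit build
  return 0;
}
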
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/rootChunkArea.cpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/rootChunkArea.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/rootChunkArea.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/rootChunkArea.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -481,16 +481,6 @@
FREE_C_HEAP_ARRAY(RootChunkArea, _arr);
}
-// Returns true if all areas in this area table are free (only contain free chunks).
-bool RootChunkAreaLUT::is_free() const {
- for (int i = 0; i < _num; i++) {
- if (!_arr[i].is_free()) {
- return false;
- }
- }
- return true;
-}
-
#ifdef ASSERT
void RootChunkAreaLUT::verify() const {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/rootChunkArea.hpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/rootChunkArea.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/rootChunkArea.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/rootChunkArea.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -107,10 +107,6 @@
size_t word_size() const { return chunklevel::MAX_CHUNK_WORD_SIZE; }
const MetaWord* end() const { return _base + word_size(); }
- // Direct access to the first chunk (use with care)
- Metachunk* first_chunk() { return _first_chunk; }
- const Metachunk* first_chunk() const { return _first_chunk; }
-
// Returns true if this root chunk area is completely free:
// In that case, it should only contain one chunk (maximally merged, so a root chunk)
// and it should be free.
@@ -182,20 +178,12 @@
return _arr + idx;
}
- // Access area by its index
- int number_of_areas() const { return _num; }
- RootChunkArea* get_area_by_index(int index) { assert(index >= 0 && index < _num, "oob"); return _arr + index; }
- const RootChunkArea* get_area_by_index(int index) const { assert(index >= 0 && index < _num, "oob"); return _arr + index; }
-
/// range ///
const MetaWord* base() const { return _base; }
size_t word_size() const { return _num * chunklevel::MAX_CHUNK_WORD_SIZE; }
const MetaWord* end() const { return _base + word_size(); }
- // Returns true if all areas in this area table are free (only contain free chunks).
- bool is_free() const;
-
DEBUG_ONLY(void verify() const;)
void print_on(outputStream* st) const;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceList.cpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceList.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceList.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceList.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -34,6 +34,7 @@
#include "memory/metaspace/metaspaceCommon.hpp"
#include "memory/metaspace/virtualSpaceList.hpp"
#include "memory/metaspace/virtualSpaceNode.hpp"
+#include "runtime/atomic.hpp"
#include "runtime/mutexLocker.hpp"
namespace metaspace {
@@ -74,8 +75,10 @@
VirtualSpaceList::~VirtualSpaceList() {
assert_lock_strong(Metaspace_lock);
- // Note: normally, there is no reason ever to delete a vslist since they are
- // global objects, but for gtests it makes sense to allow this.
+ // Delete every single mapping in this list.
+ // Please note that this only gets executed during gtests under controlled
+ // circumstances, so we do not have any concurrency issues here. The "real"
+ // lists in metaspace are immortal.
VirtualSpaceNode* vsn = _first_node;
VirtualSpaceNode* vsn2 = vsn;
while (vsn != NULL) {
@@ -96,7 +99,7 @@
_commit_limiter,
&_reserved_words_counter, &_committed_words_counter);
vsn->set_next(_first_node);
- _first_node = vsn;
+ Atomic::release_store(&_first_node, vsn);
_nodes_counter.increment();
}
@@ -134,43 +137,6 @@
return c;
}
-// Attempts to purge nodes. This will remove and delete nodes which only contain free chunks.
-// The free chunks are removed from the freelists before the nodes are deleted.
-// Return number of purged nodes.
-int VirtualSpaceList::purge(FreeChunkListVector* freelists) {
- assert_lock_strong(Metaspace_lock);
- UL(debug, "purging.");
-
- VirtualSpaceNode* vsn = _first_node;
- VirtualSpaceNode* prev_vsn = NULL;
- int num = 0, num_purged = 0;
- while (vsn != NULL) {
- VirtualSpaceNode* next_vsn = vsn->next();
- bool purged = vsn->attempt_purge(freelists);
- if (purged) {
- // Note: from now on do not dereference vsn!
- UL2(debug, "purged node @" PTR_FORMAT ".", p2i(vsn));
- if (_first_node == vsn) {
- _first_node = next_vsn;
- }
- DEBUG_ONLY(vsn = (VirtualSpaceNode*)((uintptr_t)(0xdeadbeef));)
- if (prev_vsn != NULL) {
- prev_vsn->set_next(next_vsn);
- }
- num_purged++;
- _nodes_counter.decrement();
- } else {
- prev_vsn = vsn;
- }
- vsn = next_vsn;
- num ++;
- }
-
- UL2(debug, "purged %d nodes (before: %d, now: %d)",
- num_purged, num, num_nodes());
- return num_purged;
-}
-
// Print all nodes in this space list.
void VirtualSpaceList::print_on(outputStream* st) const {
MutexLocker fcl(Metaspace_lock, Mutex::_no_safepoint_check_flag);
@@ -223,7 +189,8 @@
// Returns true if this pointer is contained in one of our nodes.
bool VirtualSpaceList::contains(const MetaWord* p) const {
- const VirtualSpaceNode* vsn = _first_node;
+ // Note: needs to work without locks.
+ const VirtualSpaceNode* vsn = Atomic::load_acquire(&_first_node);
while (vsn != NULL) {
if (vsn->contains(p)) {
return true;
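
Aside: the Atomic::release_store in create_new_node pairs with the Atomic::load_acquire here in the standard publication idiom: release orders the node's initialization before the head update, acquire orders the head load before the dereference. Lock-free reading is safe only because, as the new comments state, nodes are never removed. A minimal std::atomic analogue (simplified types, not the HotSpot classes):

#include <atomic>

struct Node {
  const int* payload_begin;
  const int* payload_end;
  Node* next;
};

std::atomic<Node*> first_node{nullptr};

// Writer (under an external lock): fully construct the node, then publish.
void push(Node* n, Node* current_head) {
  n->next = current_head;
  first_node.store(n, std::memory_order_release);
}

// Lock-free reader: safe because nodes are immortal once published.
bool contains(const int* p) {
  for (Node* n = first_node.load(std::memory_order_acquire); n != nullptr; n = n->next) {
    if (p >= n->payload_begin && p < n->payload_end) {
      return true;
    }
  }
  return false;
}

int main() {
  static int payload[4] = {0, 1, 2, 3};
  static Node n{payload, payload + 4, nullptr};
  push(&n, nullptr);
  return contains(&payload[2]) ? 0 : 1;
}
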
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceList.hpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceList.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceList.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceList.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -40,8 +40,7 @@
class Metachunk;
class FreeChunkListVector;
-// VirtualSpaceList manages a single (if its non-expandable) or
-// a series of (if its expandable) virtual memory regions used
+// VirtualSpaceList manages a series of virtual memory regions used
// for metaspace.
//
// Internally it holds a list of nodes (VirtualSpaceNode) each
@@ -49,17 +48,24 @@
// this list is the current node and used for allocation of new
// root chunks.
//
-// Beyond access to those nodes and the ability to grow new nodes
-// (if expandable) it allows for purging: purging this list means
-// removing and unmapping all memory regions which are unused.
+// The list will only ever grow, never shrink. It will be immortal,
+// never to be destroyed.
+//
+// The list will only be modified under lock protection, but may be
+// read concurrently without lock.
+//
+// The list may be prevented from expanding beyond a single node -
+// in that case it degenerates to a one-node-list (used for
+// class space).
+//
class VirtualSpaceList : public CHeapObj<mtClass> {
// Name
const char* const _name;
- // Head of the list.
- VirtualSpaceNode* _first_node;
+ // Head of the list (last added).
+ VirtualSpaceNode* volatile _first_node;
// Number of nodes (kept for statistics only).
IntCounter _nodes_counter;
@@ -101,11 +107,6 @@
// the list cannot be expanded (in practice this means we reached CompressedClassSpaceSize).
Metachunk* allocate_root_chunk();
- // Attempts to purge nodes. This will remove and delete nodes which only contain free chunks.
- // The free chunks are removed from the freelists before the nodes are deleted.
- // Return number of purged nodes.
- int purge(FreeChunkListVector* freelists);
-
//// Statistics ////
// Return sum of reserved words in all nodes.
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceNode.cpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceNode.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceNode.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceNode.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -369,48 +369,6 @@
return rc;
}
-// Attempts to purge the node:
-//
-// If all chunks living in this node are free, they will all be removed from
-// the freelist they currently reside in. Then, the node will be deleted.
-//
-// Returns true if the node has been deleted, false if not.
-// !! If this returns true, do not access the node from this point on. !!
-bool VirtualSpaceNode::attempt_purge(FreeChunkListVector* freelists) {
- assert_lock_strong(Metaspace_lock);
-
- if (!_owns_rs) {
- // We do not allow purging of nodes if we do not own the
- // underlying ReservedSpace (CompressClassSpace case).
- return false;
- }
-
- // First find out if all areas are empty. Since empty chunks collapse to root chunk
- // size, if all chunks in this node are free root chunks we are good to go.
- if (!_root_chunk_area_lut.is_free()) {
- return false;
- }
-
- UL(debug, ": purging.");
-
- // Okay, we can purge. Before we can do this, we need to remove all chunks from the freelist.
- for (int narea = 0; narea < _root_chunk_area_lut.number_of_areas(); narea++) {
- RootChunkArea* ra = _root_chunk_area_lut.get_area_by_index(narea);
- Metachunk* c = ra->first_chunk();
- if (c != NULL) {
- UL2(trace, "removing chunk from to-be-purged node: "
- METACHUNK_FULL_FORMAT ".", METACHUNK_FULL_FORMAT_ARGS(c));
- assert(c->is_free() && c->is_root_chunk(), "Sanity");
- freelists->remove(c);
- }
- }
-
- // Now, delete the node, then right away return since this object is invalid.
- delete this;
-
- return true;
-}
-
void VirtualSpaceNode::print_on(outputStream* st) const {
size_t scale = K;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceNode.hpp openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceNode.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/memory/metaspace/virtualSpaceNode.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/memory/metaspace/virtualSpaceNode.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -208,15 +208,6 @@
// On success, true is returned, false otherwise.
bool attempt_enlarge_chunk(Metachunk* c, FreeChunkListVector* freelists);
- // Attempts to purge the node:
- //
- // If all chunks living in this node are free, they will all be removed from
- // the freelist they currently reside in. Then, the node will be deleted.
- //
- // Returns true if the node has been deleted, false if not.
- // !! If this returns true, do not access the node from this point on. !!
- bool attempt_purge(FreeChunkListVector* freelists);
-
// Attempts to uncommit free areas according to the rules set in settings.
// Returns number of words uncommitted.
size_t uncommit_free_areas();
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/oops/array.hpp openjdk-17-17.0.7+7/src/hotspot/share/oops/array.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/oops/array.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/oops/array.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -91,7 +91,7 @@
Array(int length, T init) : _length(length) {
assert(length >= 0, "illegal length");
for (int i = 0; i < length; i++) {
- _data[i] = init;
+ data()[i] = init;
}
}
@@ -99,12 +99,22 @@
// standard operations
int length() const { return _length; }
- T* data() { return _data; }
+
+ T* data() {
+ return reinterpret_cast<T*>(
+ reinterpret_cast<char*>(this) + base_offset_in_bytes());
+ }
+
+ const T* data() const {
+ return reinterpret_cast<const T*>(
+ reinterpret_cast<const char*>(this) + base_offset_in_bytes());
+ }
+
bool is_empty() const { return length() == 0; }
int index_of(const T& x) const {
int i = length();
- while (i-- > 0 && _data[i] != x) ;
+ while (i-- > 0 && data()[i] != x) ;
return i;
}
@@ -112,9 +122,9 @@
// sort the array.
bool contains(const T& x) const { return index_of(x) >= 0; }
- T at(int i) const { assert(i >= 0 && i< _length, "oob: 0 <= %d < %d", i, _length); return _data[i]; }
- void at_put(const int i, const T& x) { assert(i >= 0 && i< _length, "oob: 0 <= %d < %d", i, _length); _data[i] = x; }
- T* adr_at(const int i) { assert(i >= 0 && i< _length, "oob: 0 <= %d < %d", i, _length); return &_data[i]; }
+ T at(int i) const { assert(i >= 0 && i< _length, "oob: 0 <= %d < %d", i, _length); return data()[i]; }
+ void at_put(const int i, const T& x) { assert(i >= 0 && i< _length, "oob: 0 <= %d < %d", i, _length); data()[i] = x; }
+ T* adr_at(const int i) { assert(i >= 0 && i< _length, "oob: 0 <= %d < %d", i, _length); return &data()[i]; }
int find(const T& x) { return index_of(x); }
T at_acquire(const int i) { return Atomic::load_acquire(adr_at(i)); }
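
Aside: the rewrite above drops the old one-element trailing array (a frequent source of compiler out-of-bounds warnings) and instead locates the elements at the first aligned offset past the header. A standalone sketch of the layout trick with a simplified header (not the HotSpot Array class or its metaspace allocation):

#include <cstddef>
#include <new>

template <typename T>
class FlatArray {
  int _length;

  // First offset past the header that is suitably aligned for T.
  static size_t base_offset_in_bytes() {
    const size_t a = alignof(T);
    return (sizeof(FlatArray) + a - 1) & ~(a - 1);
  }

  explicit FlatArray(int length) : _length(length) {}

public:
  int length() const { return _length; }

  T* data() {
    return reinterpret_cast<T*>(reinterpret_cast<char*>(this) + base_offset_in_bytes());
  }

  // Allocate header and elements in one block, as the real class does in metaspace.
  static FlatArray* make(int length) {
    void* mem = ::operator new(base_offset_in_bytes() + sizeof(T) * length);
    return new (mem) FlatArray(length);
  }
};

int main() {
  FlatArray<long>* a = FlatArray<long>::make(3);
  for (int i = 0; i < a->length(); i++) a->data()[i] = 10L * i;
  long sum = a->data()[0] + a->data()[1] + a->data()[2];
  ::operator delete(a);
  return sum == 30 ? 0 : 1;
}
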
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/oops/instanceKlass.cpp openjdk-17-17.0.7+7/src/hotspot/share/oops/instanceKlass.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/oops/instanceKlass.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/oops/instanceKlass.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1021,6 +1021,59 @@
}
}
+ResourceHashtable<const InstanceKlass*, OopHandle, primitive_hash<const InstanceKlass*>,
+ primitive_equals<const InstanceKlass*>,
+ 107,
+ ResourceObj::C_HEAP,
+ mtClass>
+ _initialization_error_table;
+
+void InstanceKlass::add_initialization_error(JavaThread* current, Handle exception) {
+ // Create the same exception with a message indicating the thread name,
+ // and the StackTraceElements.
+ // If the initialization error is OOM, this might not work, but if GC kicks in
+ // this would be still be helpful.
+ JavaThread* THREAD = current;
+ Handle cause = java_lang_Throwable::get_cause_with_stack_trace(exception, THREAD);
+ if (HAS_PENDING_EXCEPTION || cause.is_null()) {
+ CLEAR_PENDING_EXCEPTION;
+ return;
+ }
+
+ MutexLocker ml(THREAD, ClassInitError_lock);
+ OopHandle elem = OopHandle(Universe::vm_global(), cause());
+ bool created = false;
+ _initialization_error_table.put_if_absent(this, elem, &created);
+ assert(created, "Initialization is single threaded");
+ ResourceMark rm(THREAD);
+ log_trace(class, init)("Initialization error added for class %s", external_name());
+}
+
+oop InstanceKlass::get_initialization_error(JavaThread* current) {
+ MutexLocker ml(current, ClassInitError_lock);
+ OopHandle* h = _initialization_error_table.get(this);
+ return (h != nullptr) ? h->resolve() : nullptr;
+}
+
+// Need to remove entries for unloaded classes.
+void InstanceKlass::clean_initialization_error_table() {
+ struct InitErrorTableCleaner {
+ bool do_entry(const InstanceKlass* ik, OopHandle h) {
+ if (!ik->is_loader_alive()) {
+ h.release(Universe::vm_global());
+ return true;
+ } else {
+ return false;
+ }
+ }
+ };
+
+ MutexLocker ml(ClassInitError_lock);
+ InitErrorTableCleaner cleaner;
+ _initialization_error_table.unlink(&cleaner);
+}
+
void InstanceKlass::initialize_impl(TRAPS) {
HandleMark hm(THREAD);
@@ -1067,16 +1120,15 @@
if (is_in_error_state()) {
DTRACE_CLASSINIT_PROBE_WAIT(erroneous, -1, wait);
ResourceMark rm(THREAD);
- const char* desc = "Could not initialize class ";
- const char* className = external_name();
- size_t msglen = strlen(desc) + strlen(className) + 1;
- char* message = NEW_RESOURCE_ARRAY(char, msglen);
- if (NULL == message) {
- // Out of memory: can't create detailed error message
- THROW_MSG(vmSymbols::java_lang_NoClassDefFoundError(), className);
+ Handle cause(THREAD, get_initialization_error(THREAD));
+
+ stringStream ss;
+ ss.print("Could not initialize class %s", external_name());
+ if (cause.is_null()) {
+ THROW_MSG(vmSymbols::java_lang_NoClassDefFoundError(), ss.as_string());
} else {
- jio_snprintf(message, msglen, "%s%s", desc, className);
- THROW_MSG(vmSymbols::java_lang_NoClassDefFoundError(), message);
+ THROW_MSG_CAUSE(vmSymbols::java_lang_NoClassDefFoundError(),
+ ss.as_string(), cause);
}
}
@@ -1107,6 +1159,7 @@
CLEAR_PENDING_EXCEPTION;
{
EXCEPTION_MARK;
+ add_initialization_error(THREAD, e);
// Locks object, set state, and notify all waiting threads
set_initialization_state_and_notify(initialization_error, THREAD);
CLEAR_PENDING_EXCEPTION;
@@ -1142,9 +1195,7 @@
// Step 9
if (!HAS_PENDING_EXCEPTION) {
set_initialization_state_and_notify(fully_initialized, CHECK);
- {
- debug_only(vtable().verify(tty, true);)
- }
+ debug_only(vtable().verify(tty, true);)
}
else {
// Step 10 and 11
@@ -1155,6 +1206,7 @@
JvmtiExport::clear_detected_exception(jt);
{
EXCEPTION_MARK;
+ add_initialization_error(THREAD, e);
set_initialization_state_and_notify(initialization_error, THREAD);
CLEAR_PENDING_EXCEPTION; // ignore any exception thrown, class initialization error is thrown below
// JVMTI has already reported the pending exception
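
Aside: the new table keeps at most one saved cause per class that failed initialization, so later "Could not initialize class" errors can chain it; the cleaner drops entries once the defining loader dies. A plain-C++ analogue of just the bookkeeping, with strings standing in for Java object handles (illustrative only):

#include <cstdio>
#include <mutex>
#include <string>
#include <unordered_map>

static std::mutex init_error_lock;
static std::unordered_map<std::string, std::string> init_error_table;

void add_initialization_error(const std::string& klass, const std::string& cause) {
  std::lock_guard<std::mutex> g(init_error_lock);
  init_error_table.emplace(klass, cause);  // first error wins, like put_if_absent
}

std::string get_initialization_error(const std::string& klass) {
  std::lock_guard<std::mutex> g(init_error_lock);
  auto it = init_error_table.find(klass);
  return it != init_error_table.end() ? it->second : std::string();
}

int main() {
  add_initialization_error("com/example/Foo", "ExceptionInInitializerError: boom");
  printf("Could not initialize class com/example/Foo (cause: %s)\n",
         get_initialization_error("com/example/Foo").c_str());
  return 0;
}
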
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/oops/instanceKlass.hpp openjdk-17-17.0.7+7/src/hotspot/share/oops/instanceKlass.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/oops/instanceKlass.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/oops/instanceKlass.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1196,6 +1196,7 @@
virtual Klass* array_klass(TRAPS);
virtual Klass* array_klass_or_null();
+ static void clean_initialization_error_table();
private:
void fence_and_clear_init_lock();
@@ -1205,6 +1206,9 @@
void initialize_super_interfaces (TRAPS);
void eager_initialize_impl ();
+ void add_initialization_error(JavaThread* current, Handle exception);
+ oop get_initialization_error(JavaThread* current);
+
// find a local method (returns NULL if not found)
Method* find_method_impl(const Symbol* name,
const Symbol* signature,
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/oops/method.cpp openjdk-17-17.0.7+7/src/hotspot/share/oops/method.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/oops/method.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/oops/method.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1245,15 +1245,6 @@
assert(is_method() && is_valid_method(this), "ensure C++ vtable is restored");
}
-address Method::from_compiled_entry_no_trampoline() const {
- CompiledMethod *code = Atomic::load_acquire(&_code);
- if (code) {
- return code->verified_entry_point();
- } else {
- return adapter()->get_c2i_entry();
- }
-}
-
// The verified_code_entry() must be called when a invoke is resolved
// on this method.
@@ -2287,6 +2278,8 @@
} else if ((intptr_t(m) & (wordSize-1)) != 0) {
// Quick sanity check on pointer.
return false;
+ } else if (!os::is_readable_range(m, m + 1)) {
+ return false;
} else if (m->is_shared()) {
return CppVtables::is_valid_shared_method(m);
} else if (Metaspace::contains_non_shared(m)) {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/oops/method.hpp openjdk-17-17.0.7+7/src/hotspot/share/oops/method.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/oops/method.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/oops/method.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -138,7 +138,6 @@
static address make_adapters(const methodHandle& mh, TRAPS);
address from_compiled_entry() const;
- address from_compiled_entry_no_trampoline() const;
address from_interpreted_entry() const;
// access flag
@@ -720,6 +719,8 @@
static methodHandle make_method_handle_intrinsic(vmIntrinsicID iid, // _invokeBasic, _linkToVirtual
Symbol* signature, //anything at all
TRAPS);
+ // Some special methods don't need to be findable by nmethod iterators and are permanent.
+ bool can_be_allocated_in_NonNMethod_space() const { return is_method_handle_intrinsic(); }
static Klass* check_non_bcp_klass(Klass* klass);
enum {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/block.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/block.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/block.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/block.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -151,6 +151,10 @@
return _nodes.contains(n);
}
+bool Block::is_trivially_unreachable() const {
+ return num_preds() <= 1 && !head()->is_Root() && !head()->is_Start();
+}
+
// Return empty status of a block. Empty blocks contain only the head, other
// ideal nodes, and an optional trailing goto.
int Block::is_Empty() const {
@@ -170,7 +174,7 @@
}
// Unreachable blocks are considered empty
- if (num_preds() <= 1) {
+ if (is_trivially_unreachable()) {
return success_result;
}
@@ -539,6 +543,48 @@
block->_freq = freq;
// add new basic block to basic block list
add_block_at(block_no + 1, block);
+ // Update dominator tree information of the new goto block.
+ block->_idom = in;
+ block->_dom_depth = in->_dom_depth + 1;
+ if (out->_idom != in) {
+ // The successor block was not immediately dominated by the predecessor
+ // block, so there is no dominator subtree to update.
+ return;
+ }
+ // Update immediate dominator of the successor block.
+ out->_idom = block;
+ // Increment the dominator tree depth of the goto block's descendants. These
+ // are found by a depth-first search starting from the successor block. Two
+ // domination properties guarantee that only descendant blocks are visited:
+ // 1) all dominators of a block b must appear in any path from the root to b;
+ // 2) if a block b does not dominate another block b', b cannot dominate any
+ // block reachable from b' either.
+ // The exploration uses header indices as block identifiers, since
+ // Block::_pre_order might not be unique in the context of this function.
+ ResourceMark rm;
+ VectorSet descendants;
+ descendants.set(block->head()->_idx); // The goto block is a descendant of itself.
+ Block_List worklist;
+ worklist.push(out); // Start exploring from the successor block.
+ while (worklist.size() > 0) {
+ Block* b = worklist.pop();
+ // The immediate dominator of b is a descendant, hence b is also a
+ // descendant. Even though all predecessors of b might not have been visited
+ // yet, we know that all dominators of b have been already visited (since
+ // they must appear in any path from the goto block to b).
+ descendants.set(b->head()->_idx);
+ b->_dom_depth++;
+ for (uint i = 0; i < b->_num_succs; i++) {
+ Block* s = b->_succs[i];
+ if (s != get_root_block() &&
+ !descendants.test(s->head()->_idx) &&
+ // Do not search below non-descendant successors, since any block
+ // reachable from them cannot be descendant either.
+ descendants.test(s->_idom->head()->_idx)) {
+ worklist.push(s);
+ }
+ }
+ }
}
// Does this block end in a multiway branch that cannot have the default case
@@ -574,10 +620,13 @@
// fake exit path to infinite loops. At this late stage they need to turn
// into Goto's so that when you enter the infinite loop you indeed hang.
void PhaseCFG::convert_NeverBranch_to_Goto(Block *b) {
- // Find true target
int end_idx = b->end_idx();
- int idx = b->get_node(end_idx+1)->as_Proj()->_con;
- Block *succ = b->_succs[idx];
+ NeverBranchNode* never_branch = b->get_node(end_idx)->as_NeverBranch();
+ Block* succ = get_block_for_node(never_branch->proj_out(0)->unique_ctrl_out_or_null());
+ Block* dead = get_block_for_node(never_branch->proj_out(1)->unique_ctrl_out_or_null());
+ assert(succ == b->_succs[0] || succ == b->_succs[1], "succ is a successor");
+ assert(dead == b->_succs[0] || dead == b->_succs[1], "dead is a successor");
+
Node* gto = _goto->clone(); // get a new goto node
gto->set_req(0, b->head());
Node *bp = b->get_node(end_idx);
@@ -590,19 +639,23 @@
b->_num_succs = 1;
// remap successor's predecessors if necessary
uint j;
- for( j = 1; j < succ->num_preds(); j++)
- if( succ->pred(j)->in(0) == bp )
+ for (j = 1; j < succ->num_preds(); j++) {
+ if (succ->pred(j)->in(0) == bp) {
succ->head()->set_req(j, gto);
+ }
+ }
// Kill alternate exit path
- Block *dead = b->_succs[1-idx];
- for( j = 1; j < dead->num_preds(); j++)
- if( dead->pred(j)->in(0) == bp )
+ for (j = 1; j < dead->num_preds(); j++) {
+ if (dead->pred(j)->in(0) == bp) {
break;
+ }
+ }
// Scan through block, yanking dead path from
// all regions and phis.
dead->head()->del_req(j);
- for( int k = 1; dead->get_node(k)->is_Phi(); k++ )
+ for (int k = 1; dead->get_node(k)->is_Phi(); k++) {
dead->get_node(k)->del_req(j);
+ }
}
// Helper function to move block bx to the slot following b_index. Return
@@ -689,7 +742,7 @@
// to give a fake exit path to infinite loops. At this late stage they
// need to turn into Goto's so that when you enter the infinite loop you
// indeed hang.
- if (block->get_node(block->end_idx())->Opcode() == Op_NeverBranch) {
+ if (block->get_node(block->end_idx())->is_NeverBranch()) {
convert_NeverBranch_to_Goto(block);
}
@@ -939,6 +992,46 @@
} // End of for all blocks
}
+void PhaseCFG::remove_unreachable_blocks() {
+ ResourceMark rm;
+ Block_List unreachable;
+ // Initialize worklist of unreachable blocks to be removed.
+ for (uint i = 0; i < number_of_blocks(); i++) {
+ Block* block = get_block(i);
+ assert(block->_pre_order == i, "Block::pre_order does not match block index");
+ if (block->is_trivially_unreachable()) {
+ unreachable.push(block);
+ }
+ }
+ // Now remove all blocks that are transitively unreachable.
+ while (unreachable.size() > 0) {
+ Block* dead = unreachable.pop();
+ // When this code runs (after PhaseCFG::fixup_flow()), Block::_pre_order
+ // does not contain pre-order but block-list indices. Ensure they stay
+ // contiguous by decrementing _pre_order for all elements after 'dead'.
+ // Block::_rpo does not contain valid reverse post-order indices anymore
+ // (they are invalidated by block insertions in PhaseCFG::fixup_flow()),
+ // so there is no need to update them.
+ for (uint i = dead->_pre_order + 1; i < number_of_blocks(); i++) {
+ get_block(i)->_pre_order--;
+ }
+ _blocks.remove(dead->_pre_order);
+ _number_of_blocks--;
+ // Update the successors' predecessor list and push new unreachable blocks.
+ for (uint i = 0; i < dead->_num_succs; i++) {
+ Block* succ = dead->_succs[i];
+ Node* head = succ->head();
+ for (int j = head->req() - 1; j >= 1; j--) {
+ if (get_block_for_node(head->in(j)) == dead) {
+ head->del_req(j);
+ }
+ }
+ if (succ->is_trivially_unreachable()) {
+ unreachable.push(succ);
+ }
+ }
+ }
+}
// postalloc_expand: Expand nodes after register allocation.
//
@@ -1226,6 +1319,23 @@
assert(found, "block b is not in n's home loop or an ancestor of it");
}
+void PhaseCFG::verify_dominator_tree() const {
+ for (uint i = 0; i < number_of_blocks(); i++) {
+ Block* block = get_block(i);
+ assert(block->_dom_depth <= number_of_blocks(), "unexpected dominator tree depth");
+ if (block == get_root_block()) {
+ assert(block->_dom_depth == 1, "unexpected root dominator tree depth");
+ // The root block does not have an immediate dominator, stop checking.
+ continue;
+ }
+ assert(block->_idom != nullptr, "non-root blocks must have immediate dominators");
+ assert(block->_dom_depth == block->_idom->_dom_depth + 1,
+ "the dominator tree depth of a node must succeed that of its immediate dominator");
+ assert(block->num_preds() > 2 || block->_idom == get_block_for_node(block->pred(1)),
+ "the immediate dominator of a single-predecessor block must be the predecessor");
+ }
+}
+
void PhaseCFG::verify() const {
// Verify sane CFG
for (uint i = 0; i < number_of_blocks(); i++) {
@@ -1295,6 +1405,7 @@
assert(block->_num_succs == 2, "Conditional branch must have two targets");
}
}
+ verify_dominator_tree();
}
#endif // ASSERT
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/block.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/block.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/block.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/block.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -321,6 +321,9 @@
// Check whether the node is in the block.
bool contains (const Node *n) const;
+ // Whether the block is not root-like and does not have any predecessors.
+ bool is_trivially_unreachable() const;
+
// Return the empty status of a block
enum { not_empty, empty_with_goto, completely_empty };
int is_Empty() const;
@@ -604,6 +607,10 @@
void remove_empty_blocks();
Block *fixup_trap_based_check(Node *branch, Block *block, int block_pos, Block *bnext);
void fixup_flow();
+ // Remove all blocks that are transitively unreachable. Such blocks can be
+ // found e.g. after PhaseCFG::convert_NeverBranch_to_Goto(). This function
+ // assumes post-fixup_flow() block indices (Block::_pre_order, Block::_rpo).
+ void remove_unreachable_blocks();
// Insert a node into a block at index and map the node to the block
void insert(Block *b, uint idx, Node *n) {
@@ -630,6 +637,8 @@
// Check that block b is in the home loop (or an ancestor) of n, if n is a
// memory writer.
void verify_memory_writer_placement(const Block* b, const Node* n) const NOT_DEBUG_RETURN;
+ // Check local dominator tree invariants.
+ void verify_dominator_tree() const NOT_DEBUG_RETURN;
void verify() const NOT_DEBUG_RETURN;
};
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/buildOopMap.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/buildOopMap.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/buildOopMap.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/buildOopMap.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -344,10 +344,26 @@
} else {
// Other - some reaching non-oop value
#ifdef ASSERT
- if( t->isa_rawptr() && C->cfg()->_raw_oops.member(def) ) {
- def->dump();
- n->dump();
- assert(false, "there should be a oop in OopMap instead of a live raw oop at safepoint");
+ if (t->isa_rawptr()) {
+ ResourceMark rm;
+ Unique_Node_List worklist;
+ worklist.push(def);
+ for (uint i = 0; i < worklist.size(); i++) {
+ Node* m = worklist.at(i);
+ if (C->cfg()->_raw_oops.member(m)) {
+ def->dump();
+ m->dump();
+ n->dump();
+ assert(false, "there should be an oop in OopMap instead of a live raw oop at safepoint");
+ }
+ // Check users as well because def might be spilled
+ for (DUIterator_Fast jmax, j = m->fast_outs(jmax); j < jmax; j++) {
+ Node* u = m->fast_out(j);
+ if ((u->is_SpillCopy() && u->in(1) == m) || u->is_Phi()) {
+ worklist.push(u);
+ }
+ }
+ }
}
#endif
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/bytecodeInfo.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/bytecodeInfo.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/bytecodeInfo.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/bytecodeInfo.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -43,7 +43,7 @@
JVMState* caller_jvms, int caller_bci,
int max_inline_level) :
C(c),
- _caller_jvms(caller_jvms),
+ _caller_jvms(NULL),
_method(callee),
_caller_tree((InlineTree*) caller_tree),
_count_inline_bcs(method()->code_size_for_inlining()),
@@ -55,13 +55,13 @@
_count_inlines = 0;
_forced_inline = false;
#endif
- if (_caller_jvms != NULL) {
+ if (caller_jvms != NULL) {
// Keep a private copy of the caller_jvms:
_caller_jvms = new (C) JVMState(caller_jvms->method(), caller_tree->caller_jvms());
_caller_jvms->set_bci(caller_jvms->bci());
assert(!caller_jvms->should_reexecute(), "there should be no reexecute bytecode with inlining");
+ assert(_caller_jvms->same_calls_as(caller_jvms), "consistent JVMS");
}
- assert(_caller_jvms->same_calls_as(caller_jvms), "consistent JVMS");
assert((caller_tree == NULL ? 0 : caller_tree->stack_depth() + 1) == stack_depth(), "correct (redundant) depth parameter");
assert(caller_bci == this->caller_bci(), "correct (redundant) bci parameter");
// Update hierarchical counts, count_inline_bcs() and count_inlines()
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/cfgnode.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/cfgnode.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/cfgnode.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/cfgnode.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -2717,9 +2717,8 @@
// exception oop through.
CallNode *call = in(1)->in(0)->as_Call();
- return ( in(0)->is_CatchProj() && in(0)->in(0)->in(1) == in(1) )
- ? this
- : call->in(TypeFunc::Parms);
+ return (in(0)->is_CatchProj() && in(0)->in(0)->is_Catch() &&
+ in(0)->in(0)->in(1) == in(1)) ? this : call->in(TypeFunc::Parms);
}
//=============================================================================
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/cfgnode.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/cfgnode.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/cfgnode.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/cfgnode.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -602,7 +602,10 @@
// empty.
class NeverBranchNode : public MultiBranchNode {
public:
- NeverBranchNode( Node *ctrl ) : MultiBranchNode(1) { init_req(0,ctrl); }
+ NeverBranchNode(Node* ctrl) : MultiBranchNode(1) {
+ init_req(0, ctrl);
+ init_class_id(Class_NeverBranch);
+ }
virtual int Opcode() const;
virtual bool pinned() const { return true; };
virtual const Type *bottom_type() const { return TypeTuple::IFBOTH; }
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/chaitin.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/chaitin.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/chaitin.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/chaitin.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -730,8 +730,8 @@
int yank_if_dead_recurse(Node *old, Node *orig_old, Block *current_block,
Node_List *value, Node_List *regnd);
int yank( Node *old, Block *current_block, Node_List *value, Node_List *regnd );
- int elide_copy( Node *n, int k, Block *current_block, Node_List &value, Node_List ®nd, bool can_change_regs );
- int use_prior_register( Node *copy, uint idx, Node *def, Block *current_block, Node_List &value, Node_List ®nd );
+ int elide_copy( Node *n, int k, Block *current_block, Node_List *value, Node_List *regnd, bool can_change_regs );
+ int use_prior_register( Node *copy, uint idx, Node *def, Block *current_block, Node_List *value, Node_List *regnd );
bool may_be_copy_of_callee( Node *def ) const;
// If nreg already contains the same constant as val then eliminate it
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/compile.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/compile.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/compile.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/compile.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -400,7 +400,7 @@
}
// Disconnect all useless nodes by disconnecting those at the boundary.
-void Compile::remove_useless_nodes(Unique_Node_List &useful) {
+void Compile::disconnect_useless_nodes(Unique_Node_List &useful, Unique_Node_List* worklist) {
uint next = 0;
while (next < useful.size()) {
Node *n = useful.at(next++);
@@ -423,7 +423,7 @@
}
}
if (n->outcnt() == 1 && n->has_special_unique_user()) {
- record_for_igvn(n->unique_out());
+ worklist->push(n->unique_out());
}
}
@@ -433,6 +433,11 @@
remove_useless_nodes(_expensive_nodes, useful); // remove useless expensive nodes
remove_useless_nodes(_for_post_loop_igvn, useful); // remove useless node recorded for post loop opts IGVN pass
remove_useless_coarsened_locks(useful); // remove useless coarsened locks nodes
+#ifdef ASSERT
+ if (_modified_nodes != NULL) {
+ _modified_nodes->remove_useless_nodes(useful.member_set());
+ }
+#endif
BarrierSetC2* bs = BarrierSet::barrier_set()->barrier_set_c2();
bs->eliminate_useless_gc_barriers(useful, this);
@@ -2756,6 +2761,8 @@
cfg.set_loop_alignment();
}
cfg.fixup_flow();
+ cfg.remove_unreachable_blocks();
+ cfg.verify_dominator_tree();
}
// Apply peephole optimizations
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/compile.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/compile.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/compile.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/compile.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -932,7 +932,7 @@
void identify_useful_nodes(Unique_Node_List &useful);
void update_dead_node_list(Unique_Node_List &useful);
- void remove_useless_nodes (Unique_Node_List &useful);
+ void disconnect_useless_nodes(Unique_Node_List &useful, Unique_Node_List* worklist);
void remove_useless_node(Node* dead);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/escape.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/escape.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/escape.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/escape.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -240,7 +240,9 @@
// 3. Adjust scalar_replaceable state of nonescaping objects and push
// scalar replaceable allocations on alloc_worklist for processing
// in split_unique_types().
+ GrowableArray<JavaObjectNode*> jobj_worklist;
int non_escaped_length = non_escaped_allocs_worklist.length();
+ bool found_nsr_alloc = false;
for (int next = 0; next < non_escaped_length; next++) {
JavaObjectNode* ptn = non_escaped_allocs_worklist.at(next);
bool noescape = (ptn->escape_state() == PointsToNode::NoEscape);
@@ -251,11 +253,25 @@
if (noescape && ptn->scalar_replaceable()) {
adjust_scalar_replaceable_state(ptn);
if (ptn->scalar_replaceable()) {
- alloc_worklist.append(ptn->ideal_node());
+ jobj_worklist.push(ptn);
+ } else {
+ found_nsr_alloc = true;
}
}
}
+ // Propagate NSR (Not Scalar Replaceable) state.
+ if (found_nsr_alloc) {
+ find_scalar_replaceable_allocs(jobj_worklist);
+ }
+
+ for (int next = 0; next < jobj_worklist.length(); ++next) {
+ JavaObjectNode* jobj = jobj_worklist.at(next);
+ if (jobj->scalar_replaceable()) {
+ alloc_worklist.append(jobj->ideal_node());
+ }
+ }
+
#ifdef ASSERT
if (VerifyConnectionGraph) {
// Verify that graph is complete - no new edges could be added or needed.
@@ -1840,15 +1856,19 @@
jobj->set_scalar_replaceable(false);
return;
}
- // 2. An object is not scalar replaceable if the field into which it is
- // stored has multiple bases one of which is null.
- if (field->base_count() > 1) {
- for (BaseIterator i(field); i.has_next(); i.next()) {
- PointsToNode* base = i.get();
- if (base == null_obj) {
- jobj->set_scalar_replaceable(false);
- return;
- }
+ for (BaseIterator i(field); i.has_next(); i.next()) {
+ PointsToNode* base = i.get();
+ // 2. An object is not scalar replaceable if the field into which it is
+ // stored has multiple bases one of which is null.
+ if ((base == null_obj) && (field->base_count() > 1)) {
+ set_not_scalar_replaceable(jobj NOT_PRODUCT(COMMA "is stored into field with potentially null base"));
+ return;
+ }
+ // 2.5. An object is not scalar replaceable if the field into which it is
+ // stored has NSR base.
+ if (!base->scalar_replaceable()) {
+ set_not_scalar_replaceable(jobj NOT_PRODUCT(COMMA "is stored into field with NSR base"));
+ return;
}
}
}
@@ -1935,6 +1955,36 @@
}
}
}
+ }
+}
+
+// Propagate NSR (Not scalar replaceable) state.
+void ConnectionGraph::find_scalar_replaceable_allocs(GrowableArray<JavaObjectNode*>& jobj_worklist) {
+ int jobj_length = jobj_worklist.length();
+ bool found_nsr_alloc = true;
+ while (found_nsr_alloc) {
+ found_nsr_alloc = false;
+ for (int next = 0; next < jobj_length; ++next) {
+ JavaObjectNode* jobj = jobj_worklist.at(next);
+ for (UseIterator i(jobj); (jobj->scalar_replaceable() && i.has_next()); i.next()) {
+ PointsToNode* use = i.get();
+ if (use->is_Field()) {
+ FieldNode* field = use->as_Field();
+ assert(field->is_oop() && field->scalar_replaceable(), "sanity");
+ assert(field->offset() != Type::OffsetBot, "sanity");
+ for (BaseIterator i(field); i.has_next(); i.next()) {
+ PointsToNode* base = i.get();
+ // An object is not scalar replaceable if the field into which
+ // it is stored has NSR base.
+ if ((base != null_obj) && !base->scalar_replaceable()) {
+ set_not_scalar_replaceable(jobj NOT_PRODUCT(COMMA "is stored into field with NSR base"));
+ found_nsr_alloc = true;
+ break;
+ }
+ }
+ }
+ }
+ }
}
}
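
The new find_scalar_replaceable_allocs() above is a classic fixed-point worklist pass: marking one allocation as NSR can make further allocations NSR, so the scan repeats until a full pass produces no change. A minimal sketch of that shape, using hypothetical stand-in types rather than HotSpot's ConnectionGraph classes:

    #include <vector>

    struct ObjSketch {
        bool scalar_replaceable = true;
        // Bases of the fields this object is stored into.
        std::vector<ObjSketch*> field_bases;
    };

    void propagate_nsr(std::vector<ObjSketch*>& objs) {
        bool changed = true;
        while (changed) {  // iterate to a fixed point
            changed = false;
            for (ObjSketch* obj : objs) {
                if (!obj->scalar_replaceable) continue;
                for (ObjSketch* base : obj->field_bases) {
                    if (!base->scalar_replaceable) {
                        // Stored into a field with an NSR base: obj becomes NSR too.
                        obj->scalar_replaceable = false;
                        changed = true;
                        break;
                    }
                }
            }
        }
    }

Termination is guaranteed because the flag only ever flips from true to false.
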
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/escape.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/escape.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/escape.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/escape.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -456,6 +456,9 @@
// Adjust scalar_replaceable state after Connection Graph is built.
void adjust_scalar_replaceable_state(JavaObjectNode* jobj);
+ // Propagate NSR (Not scalar replaceable) state.
+ void find_scalar_replaceable_allocs(GrowableArray<JavaObjectNode*>& jobj_worklist);
+
// Optimize ideal graph.
void optimize_ideal_graph(GrowableArray<Node*>& ptr_cmp_worklist,
GrowableArray<MemBarStoreStoreNode*>& storestore_worklist);
@@ -569,6 +572,11 @@
// Compute the escape information
bool compute_escape();
+ void set_not_scalar_replaceable(PointsToNode* ptn NOT_PRODUCT(COMMA const char* reason)) const {
+ ptn->set_scalar_replaceable(false);
+ }
+
+
public:
ConnectionGraph(Compile *C, PhaseIterGVN *igvn);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/idealGraphPrinter.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/idealGraphPrinter.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/idealGraphPrinter.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/idealGraphPrinter.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -377,6 +377,12 @@
print_prop("block", C->cfg()->get_block(0)->_pre_order);
} else {
print_prop("block", block->_pre_order);
+ if (node == block->head()) {
+ if (block->_idom != NULL) {
+ print_prop("idom", block->_idom->_pre_order);
+ }
+ print_prop("dom_depth", block->_dom_depth);
+ }
// Print estimated execution frequency, normalized within a [0,1] range.
buffer[0] = 0;
stringStream freq(buffer, sizeof(buffer) - 1);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/idealKit.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/idealKit.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/idealKit.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/idealKit.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -352,13 +352,14 @@
BasicType bt,
int adr_idx,
bool require_atomic_access,
- MemNode::MemOrd mo) {
+ MemNode::MemOrd mo,
+ LoadNode::ControlDependency control_dependency) {
assert(adr_idx != Compile::AliasIdxTop, "use other make_load factory" );
const TypePtr* adr_type = NULL; // debug-mode-only argument
debug_only(adr_type = C->get_adr_type(adr_idx));
Node* mem = memory(adr_idx);
- Node* ld = LoadNode::make(_gvn, ctl, mem, adr, adr_type, t, bt, mo, LoadNode::DependsOnlyOnTest, require_atomic_access);
+ Node* ld = LoadNode::make(_gvn, ctl, mem, adr, adr_type, t, bt, mo, control_dependency, require_atomic_access);
return transform(ld);
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/idealKit.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/idealKit.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/idealKit.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/idealKit.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -220,7 +220,9 @@
const Type* t,
BasicType bt,
int adr_idx,
- bool require_atomic_access = false, MemNode::MemOrd mo = MemNode::unordered);
+ bool require_atomic_access = false,
+ MemNode::MemOrd mo = MemNode::unordered,
+ LoadNode::ControlDependency control_dependency = LoadNode::DependsOnlyOnTest);
// Return the new StoreXNode
Node* store(Node* ctl,
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/ifnode.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/ifnode.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/ifnode.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/ifnode.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -733,6 +733,7 @@
ctrl->in(0)->as_If()->cmpi_folds(igvn, true) &&
// Must compare same value
ctrl->in(0)->in(1)->in(1)->in(1) != NULL &&
+ ctrl->in(0)->in(1)->in(1)->in(1) != igvn->C->top() &&
ctrl->in(0)->in(1)->in(1)->in(1) == in(1)->in(1)->in(1);
}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/loopPredicate.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/loopPredicate.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/loopPredicate.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/loopPredicate.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -108,9 +108,9 @@
// Otherwise, the continuation projection is set up to be the false
// projection. This code is also used to clone predicates to cloned loops.
ProjNode* PhaseIdealLoop::create_new_if_for_predicate(ProjNode* cont_proj, Node* new_entry,
- Deoptimization::DeoptReason reason, int opcode,
- bool if_cont_is_true_proj, Node_List* old_new,
- UnswitchingAction unswitching_action) {
+ Deoptimization::DeoptReason reason,
+ const int opcode, const bool rewire_uncommon_proj_phi_inputs,
+ const bool if_cont_is_true_proj) {
assert(cont_proj->is_uncommon_trap_if_pattern(reason), "must be a uct if pattern!");
IfNode* iff = cont_proj->in(0)->as_If();
@@ -196,39 +196,25 @@
assert(use->in(0) == rgn, "");
_igvn.rehash_node_delayed(use);
Node* phi_input = use->in(proj_index);
- if (unswitching_action == UnswitchingAction::FastLoopCloning
- && !phi_input->is_CFG() && !phi_input->is_Phi() && get_ctrl(phi_input) == uncommon_proj) {
- // There are some control dependent nodes on the uncommon projection and we are currently copying predicates
- // to the fast loop in loop unswitching (first step, slow loop is processed afterwards). For the fast loop,
- // we need to clone all the data nodes in the chain from the phi ('use') up until the node whose control input
- // is the uncommon_proj. The slow loop can reuse the old data nodes and thus only needs to update the control
- // input to the uncommon_proj (done on the next invocation of this method when 'unswitch_is_slow_loop' is true.
- assert(LoopUnswitching, "sanity check");
- phi_input = clone_data_nodes_for_fast_loop(phi_input, uncommon_proj, if_uct, old_new);
- } else if (unswitching_action == UnswitchingAction::SlowLoopRewiring) {
- // Replace phi input for the old predicate path with TOP as the predicate is dying anyways. This avoids the need
- // to clone the data nodes again for the slow loop.
- assert(LoopUnswitching, "sanity check");
- _igvn.replace_input_of(use, proj_index, C->top());
+
+ if (uncommon_proj->outcnt() > 1 && !phi_input->is_CFG() && !phi_input->is_Phi() && get_ctrl(phi_input) == uncommon_proj) {
+ // There are some control dependent nodes on the uncommon projection. We cannot simply reuse these data nodes.
+ // We either need to rewire them from the old uncommon projection to the newly created uncommon proj (if the old
+ // If is dying) or clone them and update their control (if the old If is not dying).
+ if (rewire_uncommon_proj_phi_inputs) {
+ // Replace phi input for the old uncommon projection with TOP as the If is dying anyways. Reuse the old data
+ // nodes by simply updating control inputs and ctrl.
+ _igvn.replace_input_of(use, proj_index, C->top());
+ set_ctrl_of_nodes_with_same_ctrl(phi_input, uncommon_proj, if_uct);
+ } else {
+ phi_input = clone_nodes_with_same_ctrl(phi_input, uncommon_proj, if_uct);
+ }
}
use->add_req(phi_input);
has_phi = true;
}
}
assert(!has_phi || rgn->req() > 3, "no phis when region is created");
- if (unswitching_action == UnswitchingAction::SlowLoopRewiring) {
- // Rewire the control dependent data nodes for the slow loop from the old to the new uncommon projection.
- assert(uncommon_proj->outcnt() > 1 && old_new == NULL, "sanity");
- for (DUIterator_Fast jmax, j = uncommon_proj->fast_outs(jmax); j < jmax; j++) {
- Node* data = uncommon_proj->fast_out(j);
- if (!data->is_CFG()) {
- _igvn.replace_input_of(data, 0, if_uct);
- set_ctrl(data, if_uct);
- --j;
- --jmax;
- }
- }
- }
if (new_entry == NULL) {
// Attach if_cont to iff
@@ -240,70 +226,98 @@
return if_cont->as_Proj();
}
-// Clone data nodes for the fast loop while creating a new If with create_new_if_for_predicate. Returns the node which is
-// used for the uncommon trap phi input.
-Node* PhaseIdealLoop::clone_data_nodes_for_fast_loop(Node* phi_input, ProjNode* uncommon_proj, Node* if_uct, Node_List* old_new) {
- // Step 1: Clone all nodes on the data chain but do not rewire anything, yet. Keep track of the cloned nodes
- // by using the old_new mapping. This mapping is then used in step 2 to rewire the cloned nodes accordingly.
- DEBUG_ONLY(uint last_idx = C->unique();)
- Unique_Node_List list;
- list.push(phi_input);
- for (uint j = 0; j < list.size(); j++) {
- Node* next = list.at(j);
- Node* clone = next->clone();
- _igvn.register_new_node_with_optimizer(clone);
- old_new->map(next->_idx, clone);
+// Update ctrl and control inputs of all data nodes starting from 'node' to 'new_ctrl' which have 'old_ctrl' as
+// current ctrl.
+void PhaseIdealLoop::set_ctrl_of_nodes_with_same_ctrl(Node* node, ProjNode* old_ctrl, Node* new_ctrl) {
+ Unique_Node_List nodes_with_same_ctrl = find_nodes_with_same_ctrl(node, old_ctrl);
+ for (uint j = 0; j < nodes_with_same_ctrl.size(); j++) {
+ Node* next = nodes_with_same_ctrl[j];
+ if (next->in(0) == old_ctrl) {
+ _igvn.replace_input_of(next, 0, new_ctrl);
+ }
+ set_ctrl(next, new_ctrl);
+ }
+}
+
+// Recursively find all input nodes with the same ctrl.
+Unique_Node_List PhaseIdealLoop::find_nodes_with_same_ctrl(Node* node, const ProjNode* ctrl) {
+ Unique_Node_List nodes_with_same_ctrl;
+ nodes_with_same_ctrl.push(node);
+ for (uint j = 0; j < nodes_with_same_ctrl.size(); j++) {
+ Node* next = nodes_with_same_ctrl[j];
for (uint k = 1; k < next->req(); k++) {
Node* in = next->in(k);
- if (!in->is_Phi() && get_ctrl(in) == uncommon_proj) {
- list.push(in);
+ if (!in->is_Phi() && get_ctrl(in) == ctrl) {
+ nodes_with_same_ctrl.push(in);
}
}
}
+ return nodes_with_same_ctrl;
+}
+
+// Clone all nodes with the same ctrl as 'old_ctrl' starting from 'node' by following its inputs. Rewire the cloned nodes
+// to 'new_ctrl'. Returns the clone of 'node'.
+Node* PhaseIdealLoop::clone_nodes_with_same_ctrl(Node* node, ProjNode* old_ctrl, Node* new_ctrl) {
+ DEBUG_ONLY(uint last_idx = C->unique();)
+ Unique_Node_List nodes_with_same_ctrl = find_nodes_with_same_ctrl(node, old_ctrl);
+ Dict old_new_mapping = clone_nodes(nodes_with_same_ctrl); // Cloned but not rewired, yet
+ rewire_cloned_nodes_to_ctrl(old_ctrl, new_ctrl, nodes_with_same_ctrl, old_new_mapping);
+ Node* clone_phi_input = static_cast<Node*>(old_new_mapping[node]);
+ assert(clone_phi_input != NULL && clone_phi_input->_idx >= last_idx, "must exist and be a proper clone");
+ return clone_phi_input;
+}
+
+// Clone all the nodes on 'list_to_clone' and return an old->new mapping.
+Dict PhaseIdealLoop::clone_nodes(const Node_List& list_to_clone) {
+ Dict old_new_mapping(cmpkey, hashkey);
+ for (uint i = 0; i < list_to_clone.size(); i++) {
+ Node* next = list_to_clone[i];
+ Node* clone = next->clone();
+ _igvn.register_new_node_with_optimizer(clone);
+ old_new_mapping.Insert(next, clone);
+ }
+ return old_new_mapping;
+}
- // Step 2: All nodes are cloned. Rewire them by using the old_new mapping.
- for (uint j = 0; j < list.size(); j++) {
- Node* next = list.at(j);
- Node* clone = old_new->at(next->_idx);
- assert(clone != NULL && clone->_idx >= last_idx, "must exist and be a proper clone");
- if (next->in(0) == uncommon_proj) {
+// Rewire inputs of the unprocessed cloned nodes (inputs are not updated, yet, and still point to the old nodes) by
+// using the old_new_mapping.
+void PhaseIdealLoop::rewire_cloned_nodes_to_ctrl(const ProjNode* old_ctrl, Node* new_ctrl,
+ const Node_List& nodes_with_same_ctrl, const Dict& old_new_mapping) {
+ for (uint i = 0; i < nodes_with_same_ctrl.size(); i++) {
+ Node* next = nodes_with_same_ctrl[i];
+ Node* clone = static_cast<Node*>(old_new_mapping[next]);
+ if (next->in(0) == old_ctrl) {
// All data nodes with a control input to the uncommon projection in the chain need to be rewired to the new uncommon
// projection (could not only be the last data node in the chain but also, for example, a DivNode within the chain).
- _igvn.replace_input_of(clone, 0, if_uct);
- set_ctrl(clone, if_uct);
+ _igvn.replace_input_of(clone, 0, new_ctrl);
+ set_ctrl(clone, new_ctrl);
}
+ rewire_inputs_of_clones_to_clones(new_ctrl, clone, old_new_mapping, next);
+ }
+}
- // Rewire the inputs of the cloned nodes to the old nodes to the new clones.
- for (uint k = 1; k < next->req(); k++) {
- Node* in = next->in(k);
- if (!in->is_Phi()) {
- assert(!in->is_CFG(), "must be data node");
- Node* in_clone = old_new->at(in->_idx);
- if (in_clone != NULL) {
- assert(in_clone->_idx >= last_idx, "must be a valid clone");
- _igvn.replace_input_of(clone, k, in_clone);
- set_ctrl(clone, if_uct);
- }
+// Rewire the inputs of the cloned nodes to the old nodes to the new clones.
+void PhaseIdealLoop::rewire_inputs_of_clones_to_clones(Node* new_ctrl, Node* clone, const Dict& old_new_mapping,
+ const Node* next) {
+ for (uint i = 1; i < next->req(); i++) {
+ Node* in = next->in(i);
+ if (!in->is_Phi()) {
+ assert(!in->is_CFG(), "must be data node");
+ Node* in_clone = static_cast<Node*>(old_new_mapping[in]);
+ if (in_clone != NULL) {
+ _igvn.replace_input_of(clone, i, in_clone);
+ set_ctrl(clone, new_ctrl);
}
}
}
- Node* clone_phi_input = old_new->at(phi_input->_idx);
- assert(clone_phi_input != NULL && clone_phi_input->_idx >= last_idx, "must exist and be a proper clone");
- return clone_phi_input;
}
+
//--------------------------clone_predicate-----------------------
ProjNode* PhaseIdealLoop::clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry,
- Deoptimization::DeoptReason reason, Node_List* old_new) {
- UnswitchingAction unswitching_action;
- if (predicate_proj->other_if_proj()->outcnt() > 1) {
- // There are some data dependencies that need to be taken care of when cloning a predicate.
- unswitching_action = old_new == NULL ? UnswitchingAction::SlowLoopRewiring : UnswitchingAction::FastLoopCloning;
- } else {
- unswitching_action = UnswitchingAction::None;
- }
+ Deoptimization::DeoptReason reason, const bool slow_loop) {
ProjNode* new_predicate_proj = create_new_if_for_predicate(predicate_proj, new_entry, reason, Op_If,
- true, old_new, unswitching_action);
+ slow_loop);
IfNode* iff = new_predicate_proj->in(0)->as_If();
Node* ctrl = iff->in(0);
@@ -402,7 +416,8 @@
Deoptimization::DeoptReason reason,
ProjNode* output_proj) {
Node* bol = clone_skeleton_predicate_bool(iff, NULL, NULL, output_proj);
- ProjNode* proj = create_new_if_for_predicate(output_proj, NULL, reason, iff->Opcode(), predicate->is_IfTrue());
+ ProjNode* proj = create_new_if_for_predicate(output_proj, NULL, reason, iff->Opcode(),
+ false, predicate->is_IfTrue());
_igvn.replace_input_of(proj->in(0), 1, bol);
_igvn.replace_input_of(output_proj->in(0), 0, proj);
set_idom(output_proj->in(0), proj, dom_depth(proj));
@@ -435,8 +450,8 @@
}
if (predicate_proj != NULL) { // right pattern that can be used by loop predication
// clone predicate
- iffast_pred = clone_predicate_to_unswitched_loop(predicate_proj, iffast_pred, Deoptimization::Reason_predicate, &old_new);
- ifslow_pred = clone_predicate_to_unswitched_loop(predicate_proj, ifslow_pred, Deoptimization::Reason_predicate);
+ iffast_pred = clone_predicate_to_unswitched_loop(predicate_proj, iffast_pred, Deoptimization::Reason_predicate, false);
+ ifslow_pred = clone_predicate_to_unswitched_loop(predicate_proj, ifslow_pred, Deoptimization::Reason_predicate, true);
clone_skeleton_predicates_to_unswitched_loop(loop, old_new, Deoptimization::Reason_predicate, predicate_proj, iffast_pred, ifslow_pred);
check_created_predicate_for_unswitching(iffast_pred);
@@ -444,8 +459,8 @@
}
if (profile_predicate_proj != NULL) { // right pattern that can be used by loop predication
// clone predicate
- iffast_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, iffast_pred, Deoptimization::Reason_profile_predicate, &old_new);
- ifslow_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, ifslow_pred, Deoptimization::Reason_profile_predicate);
+ iffast_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, iffast_pred, Deoptimization::Reason_profile_predicate, false);
+ ifslow_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, ifslow_pred, Deoptimization::Reason_profile_predicate, true);
clone_skeleton_predicates_to_unswitched_loop(loop, old_new, Deoptimization::Reason_profile_predicate, profile_predicate_proj, iffast_pred, ifslow_pred);
check_created_predicate_for_unswitching(iffast_pred);
@@ -455,8 +470,8 @@
// Clone loop limit check last to insert it before loop.
// Don't clone a limit check which was already finalized
// for this counted loop (only one limit check is needed).
- iffast_pred = clone_predicate_to_unswitched_loop(limit_check_proj, iffast_pred, Deoptimization::Reason_loop_limit_check, &old_new);
- ifslow_pred = clone_predicate_to_unswitched_loop(limit_check_proj, ifslow_pred, Deoptimization::Reason_loop_limit_check);
+ iffast_pred = clone_predicate_to_unswitched_loop(limit_check_proj, iffast_pred, Deoptimization::Reason_loop_limit_check, false);
+ ifslow_pred = clone_predicate_to_unswitched_loop(limit_check_proj, ifslow_pred, Deoptimization::Reason_loop_limit_check, true);
check_created_predicate_for_unswitching(iffast_pred);
check_created_predicate_for_unswitching(ifslow_pred);
@@ -1328,13 +1343,11 @@
upper_bound_iff->set_req(1, upper_bound_bol);
if (TraceLoopPredicate) tty->print_cr("upper bound check if: %s %d ", negate ? " negated" : "", lower_bound_iff->_idx);
- // Fall through into rest of the clean up code which will move
- // any dependent nodes onto the upper bound test.
- new_predicate_proj = upper_bound_proj;
-
- if (iff->is_RangeCheck()) {
- new_predicate_proj = insert_initial_skeleton_predicate(iff, loop, proj, predicate_proj, upper_bound_proj, scale, offset, init, limit, stride, rng, overflow, reason);
- }
+ // Fall through into rest of the cleanup code which will move any dependent nodes to the skeleton predicates of the
+ // upper bound test. We always need to create skeleton predicates in order to properly remove dead loops when later
+ // splitting the predicated loop into (unreachable) sub-loops (i.e. done by unrolling, peeling, pre/main/post etc.).
+ new_predicate_proj = insert_initial_skeleton_predicate(iff, loop, proj, predicate_proj, upper_bound_proj, scale,
+ offset, init, limit, stride, rng, overflow, reason);
#ifndef PRODUCT
if (TraceLoopOpts && !TraceLoopPredicate) {
@@ -1420,7 +1433,7 @@
}
LoopNode* head = loop->_head->as_Loop();
- if (head->unique_ctrl_out()->Opcode() == Op_NeverBranch) {
+ if (head->unique_ctrl_out()->is_NeverBranch()) {
// do nothing for infinite loops
return false;
}
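
The refactored helpers above split subgraph cloning into two phases: clone every node first (inputs still pointing at the originals), then rewire each clone's inputs through the old->new mapping. A minimal sketch of that two-phase pattern, with std::unordered_map standing in for HotSpot's Dict and hypothetical node types:

    #include <unordered_map>
    #include <vector>

    struct NSketch {
        std::vector<NSketch*> inputs;
    };

    // Phase 1 clones every node; phase 2 redirects inputs among the clones.
    // Returns the clone of the first (starting) node.
    NSketch* clone_subgraph(const std::vector<NSketch*>& nodes) {
        std::unordered_map<NSketch*, NSketch*> old_to_new;
        for (NSketch* n : nodes) {
            old_to_new[n] = new NSketch(*n);  // inputs still point at old nodes
        }
        for (NSketch* n : nodes) {
            for (NSketch*& in : old_to_new[n]->inputs) {
                auto it = old_to_new.find(in);
                if (it != old_to_new.end()) {
                    in = it->second;  // rewire old input to its clone
                }
            }
        }
        return old_to_new.at(nodes.front());
    }

Cloning everything before rewiring anything means input edges between two cloned nodes are never half-updated, no matter the traversal order.
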
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/loopTransform.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/loopTransform.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/loopTransform.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/loopTransform.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -3396,29 +3396,21 @@
}
// Replace the phi at loop head with the final value of the last
- // iteration. Then the CountedLoopEnd will collapse (backedge never
- // taken) and all loop-invariant uses of the exit values will be correct.
- Node *phi = cl->phi();
- Node *exact_limit = phase->exact_limit(this);
- if (exact_limit != cl->limit()) {
- // We also need to replace the original limit to collapse loop exit.
- Node* cmp = cl->loopexit()->cmp_node();
- assert(cl->limit() == cmp->in(2), "sanity");
- // Duplicate cmp node if it has other users
- if (cmp->outcnt() > 1) {
- cmp = cmp->clone();
- cmp = phase->_igvn.register_new_node_with_optimizer(cmp);
- BoolNode *bol = cl->loopexit()->in(CountedLoopEndNode::TestValue)->as_Bool();
- phase->_igvn.replace_input_of(bol, 1, cmp); // put bol on worklist
- }
- phase->_igvn._worklist.push(cmp->in(2)); // put limit on worklist
- phase->_igvn.replace_input_of(cmp, 2, exact_limit); // put cmp on worklist
- }
+ // iteration (exact_limit - stride), to make sure the loop exit value
+ // is correct, for any users after the loop.
// Note: the final value after increment should not overflow since
// counted loop has limit check predicate.
- Node *final = new SubINode(exact_limit, cl->stride());
- phase->register_new_node(final,cl->in(LoopNode::EntryControl));
- phase->_igvn.replace_node(phi,final);
+ Node* phi = cl->phi();
+ Node* exact_limit = phase->exact_limit(this);
+ Node* final_iv = new SubINode(exact_limit, cl->stride());
+ phase->register_new_node(final_iv, cl->in(LoopNode::EntryControl));
+ phase->_igvn.replace_node(phi, final_iv);
+
+ // Set loop-exit condition to false. Then the CountedLoopEnd will collapse,
+ // because the back edge is never taken.
+ Node* zero = phase->_igvn.intcon(0);
+ phase->_igvn.replace_input_of(cl->loopexit(), CountedLoopEndNode::TestValue, zero);
+
phase->C->set_major_progress();
return true;
}
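
The replacement logic above relies on the fact that, for a counted loop, the induction-variable phi's value in the last iteration is exact_limit - stride. A small self-contained check of that arithmetic (assuming an initial value of 0 and a positive stride; names are illustrative):

    #include <cassert>

    int main() {
        const int stride = 3, limit = 10;
        int i = 0;                    // assumed initial value: 0
        int last_head_value = 0;      // value of the iv phi at the loop head
        for (; i < limit; i += stride) {
            last_head_value = i;
        }
        // exact_limit rounds limit up to the next stride multiple: 12 here.
        const int exact_limit = ((limit + stride - 1) / stride) * stride;
        assert(last_head_value == exact_limit - stride);  // 9: the phi's last value
        assert(i == exact_limit);                         // 12: value after increment
        return 0;
    }
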
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/loopnode.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/loopnode.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/loopnode.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/loopnode.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -3409,17 +3409,6 @@
return 0;
}
-#ifdef ASSERT
-bool IdealLoopTree::has_reduction_nodes() const {
- for (uint i = 0; i < _body.size(); i++) {
- if (_body[i]->is_reduction()) {
- return true;
- }
- }
- return false;
-}
-#endif // ASSERT
-
#ifndef PRODUCT
//------------------------------dump_head--------------------------------------
// Dump 1 liner for loop header info
@@ -3704,24 +3693,40 @@
}
#ifdef ASSERT
+// Goes over all children of the root of the loop tree. Check if any of them have a path
+// down to Root, that does not go via a NeverBranch exit.
bool PhaseIdealLoop::only_has_infinite_loops() {
+ ResourceMark rm;
+ Unique_Node_List worklist;
+ // start traversal at all loop heads of first-level loops
for (IdealLoopTree* l = _ltree_root->_child; l != NULL; l = l->_next) {
- uint i = 1;
- for (; i < C->root()->req(); i++) {
- Node* in = C->root()->in(i);
- if (in != NULL &&
- in->Opcode() == Op_Halt &&
- in->in(0)->is_Proj() &&
- in->in(0)->in(0)->Opcode() == Op_NeverBranch &&
- in->in(0)->in(0)->in(0) == l->_head) {
- break;
- }
- }
- if (i == C->root()->req()) {
+ Node* head = l->_head;
+ assert(head->is_Region(), "");
+ worklist.push(head);
+ }
+ // BFS traversal down the CFG, except through NeverBranch exits
+ for (uint i = 0; i < worklist.size(); ++i) {
+ Node* n = worklist.at(i);
+ assert(n->is_CFG(), "only traverse CFG");
+ if (n->is_Root()) {
+ // Found root -> there was an exit!
return false;
+ } else if (n->is_NeverBranch()) {
+ // Only follow the loop-internal projection, not the NeverBranch exit
+ ProjNode* proj = n->as_NeverBranch()->proj_out_or_null(0);
+ assert(proj != nullptr, "must find loop-internal projection of NeverBranch");
+ worklist.push(proj);
+ } else {
+ // Traverse all CFG outputs
+ for (DUIterator_Fast imax, i = n->fast_outs(imax); i < imax; i++) {
+ Node* use = n->fast_out(i);
+ if (use->is_CFG()) {
+ worklist.push(use);
+ }
+ }
}
}
-
+ // No exit found for any loop -> all are infinite
return true;
}
#endif
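
The rewritten only_has_infinite_loops() above is a breadth-first CFG walk that succeeds only if no loop head can reach Root. A minimal sketch of the traversal over a generic graph (the NeverBranch filtering is elided; types are stand-ins):

    #include <cstddef>
    #include <vector>

    struct CfgSketch {
        std::vector<std::vector<int>> succs;  // successor ids per basic block
        int root;                             // id of the Root block
    };

    // True iff no block reachable from the initial worklist is Root.
    bool no_path_to_root(const CfgSketch& g, std::vector<int> worklist) {
        std::vector<bool> seen(g.succs.size(), false);
        for (std::size_t i = 0; i < worklist.size(); i++) {  // worklist doubles as BFS queue
            int n = worklist[i];
            if (n == g.root) {
                return false;  // an exit path to Root exists
            }
            if (seen[n]) continue;
            seen[n] = true;
            for (int s : g.succs[n]) {
                worklist.push_back(s);
            }
        }
        return true;  // no exit found: every seeded loop is infinite
    }
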
@@ -3736,6 +3741,8 @@
bool do_split_ifs = (_mode == LoopOptsDefault);
bool skip_loop_opts = (_mode == LoopOptsNone);
+ bool do_max_unroll = (_mode == LoopOptsMaxUnroll);
+
int old_progress = C->major_progress();
uint orig_worklist_size = _igvn._worklist.size();
@@ -3805,8 +3812,8 @@
BarrierSetC2* bs = BarrierSet::barrier_set()->barrier_set_c2();
// Nothing to do, so get out
- bool stop_early = !C->has_loops() && !skip_loop_opts && !do_split_ifs && !_verify_me && !_verify_only &&
- !bs->is_gc_specific_loop_opts_pass(_mode);
+ bool stop_early = !C->has_loops() && !skip_loop_opts && !do_split_ifs && !do_max_unroll && !_verify_me &&
+ !_verify_only && !bs->is_gc_specific_loop_opts_pass(_mode);
bool do_expensive_nodes = C->should_optimize_expensive_nodes(_igvn);
bool strip_mined_loops_expanded = bs->strip_mined_loops_expanded(_mode);
if (stop_early && !do_expensive_nodes) {
@@ -3945,7 +3952,7 @@
return;
}
- if (_mode == LoopOptsMaxUnroll) {
+ if (do_max_unroll) {
for (LoopTreeIterator iter(_ltree_root); !iter.done(); iter.next()) {
IdealLoopTree* lpt = iter.current();
if (lpt->is_innermost() && lpt->_allow_optimizations && !lpt->_has_call && lpt->is_counted()) {
@@ -5418,6 +5425,11 @@
}
}
}
+ // Don't extend live ranges of raw oops
+ if (least != early && n->is_ConstraintCast() && n->in(1)->bottom_type()->isa_rawptr() &&
+ !n->bottom_type()->isa_rawptr()) {
+ least = early;
+ }
#ifdef ASSERT
// If verifying, verify that 'verify_me' has a legal location
@@ -5515,80 +5527,88 @@
}
}
}
- tty->cr();
- tty->print_cr("idoms of early %d:", early->_idx);
- dump_idom(early);
- tty->cr();
- tty->print_cr("idoms of (wrong) LCA %d:", LCA->_idx);
- dump_idom(LCA);
- tty->cr();
- dump_real_LCA(early, LCA);
+ dump_idoms(early, LCA);
tty->cr();
}
-// Find the real LCA of early and the wrongly assumed LCA.
-void PhaseIdealLoop::dump_real_LCA(Node* early, Node* wrong_lca) {
- assert(!is_dominator(early, wrong_lca) && !is_dominator(early, wrong_lca),
- "sanity check that one node does not dominate the other");
- assert(!has_ctrl(early) && !has_ctrl(wrong_lca), "sanity check, no data nodes");
+// Class to compute the real LCA given an early node and a wrong LCA in a bad graph.
+class RealLCA {
+ const PhaseIdealLoop* _phase;
+ Node* _early;
+ Node* _wrong_lca;
+ uint _early_index;
+ int _wrong_lca_index;
+
+ // Given idom chains of early and wrong LCA: Walk through idoms starting at StartNode and find the first node which
+ // is different: Return the previously visited node which must be the real LCA.
+ // The node lists also contain _early and _wrong_lca, respectively.
+ Node* find_real_lca(Unique_Node_List& early_with_idoms, Unique_Node_List& wrong_lca_with_idoms) {
+ int early_index = early_with_idoms.size() - 1;
+ int wrong_lca_index = wrong_lca_with_idoms.size() - 1;
+ bool found_difference = false;
+ do {
+ if (early_with_idoms[early_index] != wrong_lca_with_idoms[wrong_lca_index]) {
+ // First time early and wrong LCA idoms differ. Real LCA must be at the previous index.
+ found_difference = true;
+ break;
+ }
+ early_index--;
+ wrong_lca_index--;
+ } while (wrong_lca_index >= 0);
+
+ assert(early_index >= 0, "must always find an LCA - cannot be early");
+ _early_index = early_index;
+ _wrong_lca_index = wrong_lca_index;
+ Node* real_lca = early_with_idoms[_early_index + 1]; // Plus one to skip _early.
+ assert(found_difference || real_lca == _wrong_lca, "wrong LCA dominates early and is therefore the real LCA");
+ return real_lca;
+ }
- ResourceMark rm;
- Node_List nodes_seen;
- Node* real_LCA = NULL;
- Node* n1 = wrong_lca;
- Node* n2 = early;
- uint count_1 = 0;
- uint count_2 = 0;
- // Add early and wrong_lca to simplify calculation of idom indices
- nodes_seen.push(n1);
- nodes_seen.push(n2);
-
- // Walk the idom chain up from early and wrong_lca and stop when they intersect.
- while (!n1->is_Start() && !n2->is_Start()) {
- n1 = idom(n1);
- n2 = idom(n2);
- if (n1 == n2) {
- // Both idom chains intersect at the same index
- real_LCA = n1;
- count_1 = nodes_seen.size() / 2;
- count_2 = count_1;
- break;
- }
- if (check_idom_chains_intersection(n1, count_1, count_2, &nodes_seen)) {
- real_LCA = n1;
- break;
- }
- if (check_idom_chains_intersection(n2, count_2, count_1, &nodes_seen)) {
- real_LCA = n2;
- break;
+ void dump(Node* real_lca) {
+ tty->cr();
+ tty->print_cr("idoms of early \"%d %s\":", _early->_idx, _early->Name());
+ _phase->dump_idom(_early, _early_index + 1);
+
+ tty->cr();
+ tty->print_cr("idoms of (wrong) LCA \"%d %s\":", _wrong_lca->_idx, _wrong_lca->Name());
+ _phase->dump_idom(_wrong_lca, _wrong_lca_index + 1);
+
+ tty->cr();
+ tty->print("Real LCA of early \"%d %s\" (idom[%d]) and wrong LCA \"%d %s\"",
+ _early->_idx, _early->Name(), _early_index, _wrong_lca->_idx, _wrong_lca->Name());
+ if (_wrong_lca_index >= 0) {
+ tty->print(" (idom[%d])", _wrong_lca_index);
}
- nodes_seen.push(n1);
- nodes_seen.push(n2);
+ tty->print_cr(":");
+ real_lca->dump();
}
- assert(real_LCA != NULL, "must always find an LCA");
- tty->print_cr("Real LCA of early %d (idom[%d]) and (wrong) LCA %d (idom[%d]):", early->_idx, count_2, wrong_lca->_idx, count_1);
- real_LCA->dump();
-}
-
-// Check if n is already on nodes_seen (i.e. idom chains of early and wrong_lca intersect at n). Determine the idom index of n
-// on both idom chains and return them in idom_idx_new and idom_idx_other, respectively.
-bool PhaseIdealLoop::check_idom_chains_intersection(const Node* n, uint& idom_idx_new, uint& idom_idx_other, const Node_List* nodes_seen) const {
- if (nodes_seen->contains(n)) {
- // The idom chain has just discovered n.
- // Divide by 2 because nodes_seen contains the same amount of nodes from both chains.
- idom_idx_new = nodes_seen->size() / 2;
-
- // The other chain already contained n. Search the index.
- for (uint i = 0; i < nodes_seen->size(); i++) {
- if (nodes_seen->at(i) == n) {
- // Divide by 2 because nodes_seen contains the same amount of nodes from both chains.
- idom_idx_other = i / 2;
- }
- }
- return true;
+ public:
+ RealLCA(const PhaseIdealLoop* phase, Node* early, Node* wrong_lca)
+ : _phase(phase), _early(early), _wrong_lca(wrong_lca), _early_index(0), _wrong_lca_index(0) {
+ assert(!wrong_lca->is_Start(), "StartNode is always a common dominator");
}
- return false;
+
+ void compute_and_dump() {
+ ResourceMark rm;
+ Unique_Node_List early_with_idoms;
+ Unique_Node_List wrong_lca_with_idoms;
+ early_with_idoms.push(_early);
+ wrong_lca_with_idoms.push(_wrong_lca);
+ _phase->get_idoms(_early, 10000, early_with_idoms);
+ _phase->get_idoms(_wrong_lca, 10000, wrong_lca_with_idoms);
+ Node* real_lca = find_real_lca(early_with_idoms, wrong_lca_with_idoms);
+ dump(real_lca);
+ }
+};
+
+// Dump the idom chain of early, of the wrong LCA and dump the real LCA of early and wrong LCA.
+void PhaseIdealLoop::dump_idoms(Node* early, Node* wrong_lca) {
+ assert(!is_dominator(early, wrong_lca), "sanity check that early does not dominate wrong lca");
+ assert(!has_ctrl(early) && !has_ctrl(wrong_lca), "sanity check, no data nodes");
+
+ RealLCA real_lca(this, early, wrong_lca);
+ real_lca.compute_and_dump();
}
#endif // ASSERT
@@ -5661,16 +5681,38 @@
}
}
-void PhaseIdealLoop::dump_idom(Node* n) const {
+void PhaseIdealLoop::dump_idom(Node* n, const uint count) const {
if (has_ctrl(n)) {
tty->print_cr("No idom for data nodes");
} else {
- for (int i = 0; i < 100 && !n->is_Start(); i++) {
- tty->print("idom[%d] ", i);
- n->dump();
- n = idom(n);
+ ResourceMark rm;
+ Unique_Node_List idoms;
+ get_idoms(n, count, idoms);
+ dump_idoms_in_reverse(n, idoms);
+ }
+}
+
+void PhaseIdealLoop::get_idoms(Node* n, const uint count, Unique_Node_List& idoms) const {
+ Node* next = n;
+ for (uint i = 0; !next->is_Start() && i < count; i++) {
+ next = idom(next);
+ assert(!idoms.member(next), "duplicated idom is not possible");
+ idoms.push(next);
+ }
+}
+
+void PhaseIdealLoop::dump_idoms_in_reverse(const Node* n, const Node_List& idom_list) const {
+ Node* next;
+ uint padding = 3;
+ uint node_index_padding_width = static_cast<int>(log10(C->unique())) + 1;
+ for (int i = idom_list.size() - 1; i >= 0; i--) {
+ if (i == 9 || i == 99) {
+ padding++;
}
+ next = idom_list[i];
+ tty->print_cr("idom[%d]:%*c%*d %s", i, padding, ' ', node_index_padding_width, next->_idx, next->Name());
}
+ tty->print_cr("n: %*c%*d %s", padding, ' ', node_index_padding_width, n->_idx, n->Name());
}
#endif // NOT PRODUCT
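
RealLCA above compares the two idom chains from the root downwards and stops at the first divergence; the previously matched node is the real LCA. A minimal sketch over plain integer chains (assuming each chain is ordered from the node itself up to the root, as get_idoms() produces):

    #include <vector>

    // Returns the last node common to both chains (root is chain.back()).
    int find_real_lca(const std::vector<int>& chain_a, const std::vector<int>& chain_b) {
        int ia = static_cast<int>(chain_a.size()) - 1;  // start at the root
        int ib = static_cast<int>(chain_b.size()) - 1;
        int lca = chain_a[ia];                          // root is always common
        while (ia >= 0 && ib >= 0 && chain_a[ia] == chain_b[ib]) {
            lca = chain_a[ia];  // still matching: best LCA candidate so far
            ia--;
            ib--;
        }
        return lca;  // first divergence passed: previous match is the real LCA
    }
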
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/loopnode.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/loopnode.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/loopnode.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/loopnode.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -777,11 +777,6 @@
void remove_main_post_loops(CountedLoopNode *cl, PhaseIdealLoop *phase);
-#ifdef ASSERT
- // Tell whether the body contains nodes marked as reductions.
- bool has_reduction_nodes() const;
-#endif // ASSERT
-
#ifndef PRODUCT
void dump_head() const; // Dump loop head only
void dump() const; // Dump this loop recursively
@@ -1285,21 +1280,22 @@
// Return true if exp is a scaled induction var plus (or minus) constant
bool is_scaled_iv_plus_offset(Node* exp, Node* iv, int* p_scale, Node** p_offset, int depth = 0);
- // Enum to determine the action to be performed in create_new_if_for_predicate() when processing phis of UCT regions.
- enum class UnswitchingAction {
- None, // No special action.
- FastLoopCloning, // Need to clone nodes for the fast loop.
- SlowLoopRewiring // Need to rewire nodes for the slow loop.
- };
-
// Create a new if above the uncommon_trap_if_pattern for the predicate to be promoted
ProjNode* create_new_if_for_predicate(ProjNode* cont_proj, Node* new_entry, Deoptimization::DeoptReason reason,
- int opcode, bool if_cont_is_true_proj = true, Node_List* old_new = NULL,
- UnswitchingAction unswitching_action = UnswitchingAction::None);
+ int opcode, bool rewire_uncommon_proj_phi_inputs = false,
+ bool if_cont_is_true_proj = true);
- // Clone data nodes for the fast loop while creating a new If with create_new_if_for_predicate.
- Node* clone_data_nodes_for_fast_loop(Node* phi_input, ProjNode* uncommon_proj, Node* if_uct, Node_List* old_new);
+ private:
+ // Helper functions for create_new_if_for_predicate()
+ void set_ctrl_of_nodes_with_same_ctrl(Node* node, ProjNode* old_ctrl, Node* new_ctrl);
+ Unique_Node_List find_nodes_with_same_ctrl(Node* node, const ProjNode* ctrl);
+ Node* clone_nodes_with_same_ctrl(Node* node, ProjNode* old_ctrl, Node* new_ctrl);
+ Dict clone_nodes(const Node_List& list_to_clone);
+ void rewire_cloned_nodes_to_ctrl(const ProjNode* old_ctrl, Node* new_ctrl, const Node_List& nodes_with_same_ctrl,
+ const Dict& old_new_mapping);
+ void rewire_inputs_of_clones_to_clones(Node* new_ctrl, Node* clone, const Dict& old_new_mapping, const Node* next);
+ public:
void register_control(Node* n, IdealLoopTree *loop, Node* pred, bool update_body = true);
static Node* skip_all_loop_predicates(Node* entry);
@@ -1586,8 +1582,8 @@
// Clone loop predicates to slow and fast loop when unswitching a loop
void clone_predicates_to_unswitched_loop(IdealLoopTree* loop, Node_List& old_new, ProjNode*& iffast_pred, ProjNode*& ifslow_pred);
- ProjNode* clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry, Deoptimization::DeoptReason reason,
- Node_List* old_new = NULL);
+ ProjNode* clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry,
+ Deoptimization::DeoptReason reason, bool slow_loop);
void clone_skeleton_predicates_to_unswitched_loop(IdealLoopTree* loop, const Node_List& old_new, Deoptimization::DeoptReason reason,
ProjNode* old_predicate_proj, ProjNode* iffast_pred, ProjNode* ifslow_pred);
ProjNode* clone_skeleton_predicate_for_unswitched_loops(Node* iff, ProjNode* predicate,
@@ -1596,10 +1592,8 @@
static void check_created_predicate_for_unswitching(const Node* new_entry) PRODUCT_RETURN;
bool _created_loop_node;
-#ifdef ASSERT
- void dump_real_LCA(Node* early, Node* wrong_lca);
- bool check_idom_chains_intersection(const Node* n, uint& idom_idx_new, uint& idom_idx_other, const Node_List* nodes_seen) const;
-#endif
+ DEBUG_ONLY(void dump_idoms(Node* early, Node* wrong_lca);)
+ NOT_PRODUCT(void dump_idoms_in_reverse(const Node* n, const Node_List& idom_list) const;)
public:
void set_created_loop_node() { _created_loop_node = true; }
@@ -1612,7 +1606,9 @@
#ifndef PRODUCT
void dump() const;
- void dump_idom(Node* n) const;
+ void dump_idom(Node* n) const { dump_idom(n, 1000); } // For debugging
+ void dump_idom(Node* n, uint count) const;
+ void get_idoms(Node* n, uint count, Unique_Node_List& idoms) const;
void dump(IdealLoopTree* loop, uint rpo_idx, Node_List &rpo_list) const;
void verify() const; // Major slow :-)
void verify_compare(Node* n, const PhaseIdealLoop* loop_verify, VectorSet &visited) const;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/loopopts.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/loopopts.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/loopopts.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/loopopts.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -774,8 +774,8 @@
for (DUIterator_Fast imax, i = m->fast_outs(imax); i < imax; i++) {
Node* u = m->fast_out(i);
if (u->is_CFG()) {
- if (u->Opcode() == Op_NeverBranch) {
- u = ((NeverBranchNode*)u)->proj_out(0);
+ if (u->is_NeverBranch()) {
+ u = u->as_NeverBranch()->proj_out(0);
enqueue_cfg_uses(u, wq);
} else {
wq.push(u);
@@ -958,7 +958,7 @@
#endif
lca = place_outside_loop(lca, n_loop);
assert(!n_loop->is_member(get_loop(lca)), "control must not be back in the loop");
- assert(get_loop(lca)->_nest < n_loop->_nest || lca->in(0)->Opcode() == Op_NeverBranch, "must not be moved into inner loop");
+ assert(get_loop(lca)->_nest < n_loop->_nest || lca->in(0)->is_NeverBranch(), "must not be moved into inner loop");
// Move store out of the loop
_igvn.replace_node(hook, n->in(MemNode::Memory));
@@ -1159,7 +1159,7 @@
Node* dom = idom(useblock);
if (loop->is_member(get_loop(dom)) ||
// NeverBranch nodes are not assigned to the loop when constructed
- (dom->Opcode() == Op_NeverBranch && loop->is_member(get_loop(dom->in(0))))) {
+ (dom->is_NeverBranch() && loop->is_member(get_loop(dom->in(0))))) {
break;
}
useblock = dom;
@@ -1467,16 +1467,12 @@
// like various versions of induction variable+offset. Clone the
// computation per usage to allow it to sink out of the loop.
void PhaseIdealLoop::try_sink_out_of_loop(Node* n) {
- bool is_raw_to_oop_cast = n->is_ConstraintCast() &&
- n->in(1)->bottom_type()->isa_rawptr() &&
- !n->bottom_type()->isa_rawptr();
if (has_ctrl(n) &&
!n->is_Phi() &&
!n->is_Bool() &&
!n->is_Proj() &&
!n->is_MergeMem() &&
!n->is_CMove() &&
- !is_raw_to_oop_cast && // don't extend live ranges of raw oops
n->Opcode() != Op_Opaque4 &&
!n->is_Type()) {
Node *n_ctrl = get_ctrl(n);
@@ -2045,26 +2041,28 @@
}
}
-static void clone_outer_loop_helper(Node* n, const IdealLoopTree *loop, const IdealLoopTree* outer_loop,
- const Node_List &old_new, Unique_Node_List& wq, PhaseIdealLoop* phase,
- bool check_old_new) {
+static void collect_nodes_in_outer_loop_not_reachable_from_sfpt(Node* n, const IdealLoopTree *loop, const IdealLoopTree* outer_loop,
+ const Node_List &old_new, Unique_Node_List& wq, PhaseIdealLoop* phase,
+ bool check_old_new) {
for (DUIterator_Fast jmax, j = n->fast_outs(jmax); j < jmax; j++) {
Node* u = n->fast_out(j);
assert(check_old_new || old_new[u->_idx] == NULL, "shouldn't have been cloned");
if (!u->is_CFG() && (!check_old_new || old_new[u->_idx] == NULL)) {
Node* c = phase->get_ctrl(u);
IdealLoopTree* u_loop = phase->get_loop(c);
- assert(!loop->is_member(u_loop), "can be in outer loop or out of both loops only");
- if (outer_loop->is_member(u_loop)) {
- wq.push(u);
- } else {
- // nodes pinned with control in the outer loop but not referenced from the safepoint must be moved out of
- // the outer loop too
- Node* u_c = u->in(0);
- if (u_c != NULL) {
- IdealLoopTree* u_c_loop = phase->get_loop(u_c);
- if (outer_loop->is_member(u_c_loop) && !loop->is_member(u_c_loop)) {
- wq.push(u);
+ assert(!loop->is_member(u_loop) || !loop->_body.contains(u), "can be in outer loop or out of both loops only");
+ if (!loop->is_member(u_loop)) {
+ if (outer_loop->is_member(u_loop)) {
+ wq.push(u);
+ } else {
+ // nodes pinned with control in the outer loop but not referenced from the safepoint must be moved out of
+ // the outer loop too
+ Node* u_c = u->in(0);
+ if (u_c != NULL) {
+ IdealLoopTree* u_c_loop = phase->get_loop(u_c);
+ if (outer_loop->is_member(u_c_loop) && !loop->is_member(u_c_loop)) {
+ wq.push(u);
+ }
}
}
}
@@ -2183,12 +2181,17 @@
Unique_Node_List wq;
for (uint i = 0; i < extra_data_nodes.size(); i++) {
Node* old = extra_data_nodes.at(i);
- clone_outer_loop_helper(old, loop, outer_loop, old_new, wq, this, true);
+ collect_nodes_in_outer_loop_not_reachable_from_sfpt(old, loop, outer_loop, old_new, wq, this, true);
+ }
+
+ for (uint i = 0; i < loop->_body.size(); i++) {
+ Node* old = loop->_body.at(i);
+ collect_nodes_in_outer_loop_not_reachable_from_sfpt(old, loop, outer_loop, old_new, wq, this, true);
}
Node* inner_out = sfpt->in(0);
if (inner_out->outcnt() > 1) {
- clone_outer_loop_helper(inner_out, loop, outer_loop, old_new, wq, this, true);
+ collect_nodes_in_outer_loop_not_reachable_from_sfpt(inner_out, loop, outer_loop, old_new, wq, this, true);
}
Node* new_ctrl = cl->outer_loop_exit();
@@ -2199,7 +2202,7 @@
if (n->in(0) != NULL) {
_igvn.replace_input_of(n, 0, new_ctrl);
}
- clone_outer_loop_helper(n, loop, outer_loop, old_new, wq, this, false);
+ collect_nodes_in_outer_loop_not_reachable_from_sfpt(n, loop, outer_loop, old_new, wq, this, false);
}
} else {
Node *newhead = old_new[loop->_head->_idx];
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/macro.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/macro.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/macro.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/macro.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -57,9 +57,6 @@
#if INCLUDE_G1GC
#include "gc/g1/g1ThreadLocalData.hpp"
#endif // INCLUDE_G1GC
-#if INCLUDE_SHENANDOAHGC
-#include "gc/shenandoah/c2/shenandoahBarrierSetC2.hpp"
-#endif
//
@@ -576,6 +573,7 @@
}
if (can_eliminate && res != NULL) {
+ BarrierSetC2 *bs = BarrierSet::barrier_set()->barrier_set_c2();
for (DUIterator_Fast jmax, j = res->fast_outs(jmax);
j < jmax && can_eliminate; j++) {
Node* use = res->fast_out(j);
@@ -592,8 +590,7 @@
for (DUIterator_Fast kmax, k = use->fast_outs(kmax);
k < kmax && can_eliminate; k++) {
Node* n = use->fast_out(k);
- if (!n->is_Store() && n->Opcode() != Op_CastP2X
- SHENANDOAHGC_ONLY(&& (!UseShenandoahGC || !ShenandoahBarrierSetC2::is_shenandoah_wb_pre_call(n))) ) {
+ if (!n->is_Store() && n->Opcode() != Op_CastP2X && !bs->is_gc_pre_barrier_node(n)) {
DEBUG_ONLY(disq_node = n;)
if (n->is_Load() || n->is_LoadStore()) {
NOT_PRODUCT(fail_eliminate = "Field load";)
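
Replacing the SHENANDOAHGC_ONLY check above with bs->is_gc_pre_barrier_node(n) moves a GC-specific test behind the BarrierSetC2 interface. A hypothetical miniature of that dispatch (names and the helper predicate are illustrative, not HotSpot's actual classes):

    struct GcNodeSketch;

    // Common interface: each collector answers GC-specific queries itself,
    // so callers like the allocation-elimination code need no #if GC blocks.
    struct BarrierSetC2Sketch {
        virtual ~BarrierSetC2Sketch() {}
        virtual bool is_gc_pre_barrier_node(GcNodeSketch*) const { return false; }
    };

    struct ShenandoahBarrierSetC2Sketch : BarrierSetC2Sketch {
        bool is_gc_pre_barrier_node(GcNodeSketch* n) const override {
            return looks_like_wb_pre_call(n);  // hypothetical predicate
        }
        static bool looks_like_wb_pre_call(GcNodeSketch*) { return false; }  // stub
    };
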
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/macro.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/macro.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/macro.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/macro.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -213,6 +213,7 @@
Node* intcon(jint con) const { return _igvn.intcon(con); }
Node* longcon(jlong con) const { return _igvn.longcon(con); }
Node* makecon(const Type *t) const { return _igvn.makecon(t); }
+ Node* zerocon(BasicType bt) const { return _igvn.zerocon(bt); }
Node* top() const { return C->top(); }
Node* prefetch_allocation(Node* i_o,
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/memnode.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/memnode.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/memnode.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/memnode.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -823,7 +823,7 @@
//=============================================================================
// Should LoadNode::Ideal() attempt to remove control edges?
bool LoadNode::can_remove_control() const {
- return true;
+ return !has_pinned_control_dependency();
}
uint LoadNode::size_of() const { return sizeof(*this); }
bool LoadNode::cmp( const Node &n ) const
@@ -841,7 +841,17 @@
st->print(" #"); _type->dump_on(st);
}
if (!depends_only_on_test()) {
- st->print(" (does not depend only on test)");
+ st->print(" (does not depend only on test, ");
+ if (control_dependency() == UnknownControl) {
+ st->print("unknown control");
+ } else if (control_dependency() == Pinned) {
+ st->print("pinned");
+ } else if (adr_type() == TypeRawPtr::BOTTOM) {
+ st->print("raw access");
+ } else {
+ st->print("unknown reason");
+ }
+ st->print(")");
}
}
#endif
@@ -1203,9 +1213,16 @@
}
// (This works even when value is a Con, but LoadNode::Value
// usually runs first, producing the singleton type of the Con.)
- return value;
+ if (!has_pinned_control_dependency() || value->is_Con()) {
+ return value;
+ } else {
+ return this;
+ }
}
+ if (has_pinned_control_dependency()) {
+ return this;
+ }
// Search for an existing data phi which was generated before for the same
// instance's field to avoid infinite generation of phis in a loop.
Node *region = mem->in(0);
@@ -1472,7 +1489,12 @@
}
//------------------------------split_through_phi------------------------------
// Split instance or boxed field load through Phi.
-Node *LoadNode::split_through_phi(PhaseGVN *phase) {
+Node* LoadNode::split_through_phi(PhaseGVN* phase) {
+ if (req() > 3) {
+ assert(is_LoadVector() && Opcode() != Op_LoadVector, "load has too many inputs");
+ // LoadVector subclasses such as LoadVectorMasked have extra inputs that the logic below doesn't take into account
+ return NULL;
+ }
Node* mem = in(Memory);
Node* address = in(Address);
const TypeOopPtr *t_oop = phase->type(address)->isa_oopptr();
@@ -1683,6 +1705,9 @@
// If the offset is constant and the base is an object allocation,
// try to hook me up to the exact initializing store.
Node *LoadNode::Ideal(PhaseGVN *phase, bool can_reshape) {
+ if (has_pinned_control_dependency()) {
+ return NULL;
+ }
Node* p = MemNode::Ideal_common(phase, can_reshape);
if (p) return (p == NodeSentinel) ? NULL : p;
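
With the changes above, a load whose control dependency is Pinned refuses the usual optimizations: can_remove_control() returns false, Ideal() bails out early, and Identity() only folds the load away when the replacement is a constant. A hypothetical miniature of that veto logic (not LoadNode's real interface):

    enum ControlDependencySketch { DependsOnlyOnTest, Pinned, UnknownControl };

    struct LoadSketch {
        ControlDependencySketch dep = DependsOnlyOnTest;

        // Mirrors can_remove_control(): a pinned load keeps its control edge.
        bool can_remove_control() const { return dep != Pinned; }

        // Mirrors the Identity() guard: a pinned load may only be replaced
        // when the replacement is a constant.
        bool may_fold_to(bool replacement_is_constant) const {
            return dep != Pinned || replacement_is_constant;
        }
    };
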
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/memnode.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/memnode.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/memnode.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/memnode.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -286,7 +286,9 @@
Node* convert_to_reinterpret_load(PhaseGVN& gvn, const Type* rt);
void pin() { _control_dependency = Pinned; }
- bool has_unknown_control_dependency() const { return _control_dependency == UnknownControl; }
+ ControlDependency control_dependency() const { return _control_dependency; }
+ bool has_unknown_control_dependency() const { return _control_dependency == UnknownControl; }
+ bool has_pinned_control_dependency() const { return _control_dependency == Pinned; }
#ifndef PRODUCT
virtual void dump_spec(outputStream *st) const;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/node.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/node.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/node.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/node.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -914,6 +914,7 @@
DEFINE_CLASS_QUERY(Mul)
DEFINE_CLASS_QUERY(Multi)
DEFINE_CLASS_QUERY(MultiBranch)
+ DEFINE_CLASS_QUERY(NeverBranch)
DEFINE_CLASS_QUERY(Opaque1)
DEFINE_CLASS_QUERY(OuterStripMinedLoop)
DEFINE_CLASS_QUERY(OuterStripMinedLoopEnd)
@@ -1070,6 +1071,8 @@
Node* find_similar(int opc);
// Return the unique control out if only one. Null if none or more than one.
+ // Placeholder until 8281732 is backported.
+ Node* unique_ctrl_out_or_null() const { return unique_ctrl_out(); }
Node* unique_ctrl_out() const;
// Set control or add control as precedence edge
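
DEFINE_CLASS_QUERY(NeverBranch) above is what lets the loop-optimization files earlier in this patch replace Opcode() == Op_NeverBranch with is_NeverBranch()/as_NeverBranch(). HotSpot implements these queries with class-id bit flags; a minimal sketch of the same interface using virtual dispatch instead (purely illustrative):

    #include <cassert>

    struct NeverBranchSketch;

    struct NodeSketch {
        virtual ~NodeSketch() {}
        virtual NeverBranchSketch* never_branch_or_null() { return nullptr; }
        bool is_NeverBranch() { return never_branch_or_null() != nullptr; }
        NeverBranchSketch* as_NeverBranch() {
            NeverBranchSketch* n = never_branch_or_null();
            assert(n != nullptr && "invalid node class");  // checked cast
            return n;
        }
    };

    struct NeverBranchSketch : NodeSketch {
        NeverBranchSketch* never_branch_or_null() override { return this; }
    };
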
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/phaseX.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/phaseX.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/phaseX.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/phaseX.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -423,7 +423,7 @@
worklist->remove_useless_nodes(_useful.member_set());
// Disconnect 'useless' nodes that are adjacent to useful nodes
- C->remove_useless_nodes(_useful);
+ C->disconnect_useless_nodes(_useful, worklist);
}
//=============================================================================
@@ -1740,7 +1740,7 @@
#ifdef ASSERT
static bool ccp_type_widens(const Type* t, const Type* t0) {
- assert(t->meet(t0) == t, "Not monotonic");
+ assert(t->meet(t0) == t->remove_speculative(), "Not monotonic");
switch (t->base() == t0->base() ? t->base() : Type::Top) {
case Type::Int:
assert(t0->isa_int()->_widen <= t->isa_int()->_widen, "widen increases");
@@ -1766,6 +1766,9 @@
Unique_Node_List worklist;
worklist.push(C->root());
+ assert(_root_and_safepoints.size() == 0, "must be empty (unused)");
+ _root_and_safepoints.push(C->root());
+
// Pull from worklist; compute new value; push changes out.
// This loop is the meat of CCP.
while( worklist.size() ) {
@@ -1776,8 +1779,9 @@
n = worklist.pop();
}
if (n->is_SafePoint()) {
- // Keep track of SafePoint nodes for PhaseCCP::transform()
- _safepoints.push(n);
+ // Make sure safepoints are processed by PhaseCCP::transform even if they are
+ // not reachable from the bottom. Otherwise, infinite loops would be removed.
+ _root_and_safepoints.push(n);
}
const Type *t = n->Value(this);
if (t != type(n)) {
@@ -1867,6 +1871,30 @@
}
}
}
+ push_cast_ii(worklist, n, m);
+ }
+ }
+ }
+}
+
+void PhaseCCP::push_if_not_bottom_type(Unique_Node_List& worklist, Node* n) const {
+ if (n->bottom_type() != type(n)) {
+ worklist.push(n);
+ }
+}
+
+// CastII::Value() optimizes CmpI/If patterns if the right input of the CmpI has a constant type. If the CastII input is
+// the same node as the left input into the CmpI node, the type of the CastII node can be improved accordingly. Add the
+// CastII node back to the worklist to re-apply Value() to either not miss this optimization or to undo it because it
+// cannot be applied anymore. We could have optimized the type of the CastII before but now the type of the right input
+// of the CmpI (i.e. 'parent') is no longer constant. The type of the CastII must be widened in this case.
+void PhaseCCP::push_cast_ii(Unique_Node_List& worklist, const Node* parent, const Node* use) const {
+ if (use->Opcode() == Op_CmpI && use->in(2) == parent) {
+ Node* other_cmp_input = use->in(1);
+ for (DUIterator_Fast imax, i = other_cmp_input->fast_outs(imax); i < imax; i++) {
+ Node* cast_ii = other_cmp_input->fast_out(i);
+ if (cast_ii->is_CastII()) {
+ push_if_not_bottom_type(worklist, cast_ii);
}
}
}
@@ -1888,14 +1916,15 @@
Node *new_node = _nodes[n->_idx]; // Check for transformed node
if( new_node != NULL )
return new_node; // Been there, done that, return old answer
- new_node = transform_once(n); // Check for constant
- _nodes.map( n->_idx, new_node ); // Flag as having been cloned
- // Allocate stack of size _nodes.Size()/2 to avoid frequent realloc
- GrowableArray <Node *> trstack(C->live_nodes() >> 1);
+ assert(n->is_Root(), "traversal must start at root");
+ assert(_root_and_safepoints.member(n), "root (n) must be in list");
- trstack.push(new_node); // Process children of cloned node
+ // Allocate stack of size _nodes.Size()/2 to avoid frequent realloc
+ GrowableArray<Node*> transform_stack(C->live_nodes() >> 1);
+ Unique_Node_List useful; // track all visited nodes, so that we can remove the complement
+ // Initialize the traversal.
// This CCP pass may prove that no exit test for a loop ever succeeds (i.e. the loop is infinite). In that case,
// the logic below doesn't follow any path from Root to the loop body: there's at least one such path but it's proven
// never taken (its type is TOP). As a consequence the node on the exit path that's input to Root (let's call it n) is
@@ -1903,17 +1932,18 @@
// through the graph from Root, this causes the loop body to never be processed here even when it's not dead (that
// is reachable from Root following its uses). To prevent that issue, transform() starts walking the graph from Root
// and all safepoints.
- for (uint i = 0; i < _safepoints.size(); ++i) {
- Node* nn = _safepoints.at(i);
+ for (uint i = 0; i < _root_and_safepoints.size(); ++i) {
+ Node* nn = _root_and_safepoints.at(i);
Node* new_node = _nodes[nn->_idx];
assert(new_node == NULL, "");
- new_node = transform_once(nn);
- _nodes.map(nn->_idx, new_node);
- trstack.push(new_node);
+ new_node = transform_once(nn); // Check for constant
+ _nodes.map(nn->_idx, new_node); // Flag as having been cloned
+ transform_stack.push(new_node); // Process children of cloned node
+ useful.push(new_node);
}
- while ( trstack.is_nonempty() ) {
- Node *clone = trstack.pop();
+ while (transform_stack.is_nonempty()) {
+ Node* clone = transform_stack.pop();
uint cnt = clone->req();
for( uint i = 0; i < cnt; i++ ) { // For all inputs do
Node *input = clone->in(i);
@@ -1922,15 +1952,34 @@
if( new_input == NULL ) {
new_input = transform_once(input); // Check for constant
_nodes.map( input->_idx, new_input );// Flag as having been cloned
- trstack.push(new_input);
+ transform_stack.push(new_input); // Process children of cloned node
+ useful.push(new_input);
}
assert( new_input == clone->in(i), "insanity check");
}
}
}
- return new_node;
-}
+ // The above transformation might lead to subgraphs becoming unreachable from the
+ // bottom while still being reachable from the top. As a result, nodes in that
+ // subgraph are not transformed and their bottom types are not updated, leading to
+ // an inconsistency between bottom_type() and type(). In rare cases, LoadNodes in
+ // such a subgraph, might be re-enqueued for IGVN indefinitely by MemNode::Ideal_common
+ // because their address type is inconsistent. Therefore, we aggressively remove
+ // all useless nodes here even before PhaseIdealLoop::build_loop_late gets a chance
+ // to remove them anyway.
+ if (C->cached_top_node()) {
+ useful.push(C->cached_top_node());
+ }
+ C->update_dead_node_list(useful);
+ remove_useless_nodes(useful.member_set());
+ _worklist.remove_useless_nodes(useful.member_set());
+ C->disconnect_useless_nodes(useful, &_worklist);
+
+ Node* new_root = _nodes[n->_idx];
+ assert(new_root->is_Root(), "transformed root node must be a root node");
+ return new_root;
+}
//------------------------------transform_once---------------------------------
// For PhaseCCP, transformation is IDENTITY unless Node computed a constant.
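
push_cast_ii() above follows the standard CCP worklist discipline: whenever a node's computed type changes, any dependent whose cached type might now be stale is re-enqueued, but only while its type can still move (it has not yet reached bottom_type()). A minimal sketch of that discipline with stand-in types:

    #include <vector>

    struct PNodeSketch {
        int cached_type;   // type(n): the type computed so far by CCP
        int bottom_type;   // bottom_type(): the most conservative possible type
        std::vector<PNodeSketch*> dependents;
    };

    // Mirrors push_if_not_bottom_type(): only revisit nodes whose type
    // can still change.
    void push_if_not_bottom_type(std::vector<PNodeSketch*>& worklist, PNodeSketch* n) {
        if (n->cached_type != n->bottom_type) {
            worklist.push_back(n);
        }
    }

    // When a node's type changes, dependents must be re-evaluated; this is
    // the shape of push_cast_ii() for the CmpI/CastII pattern.
    void on_type_change(std::vector<PNodeSketch*>& worklist, PNodeSketch* changed) {
        for (PNodeSketch* dep : changed->dependents) {
            push_if_not_bottom_type(worklist, dep);
        }
    }
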
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/phaseX.hpp openjdk-17-17.0.7+7/src/hotspot/share/opto/phaseX.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/phaseX.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/phaseX.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -565,9 +565,11 @@
// Phase for performing global Conditional Constant Propagation.
// Should be replaced with combined CCP & GVN someday.
class PhaseCCP : public PhaseIterGVN {
- Unique_Node_List _safepoints;
+ Unique_Node_List _root_and_safepoints;
// Non-recursive. Use analysis to transform single Node.
virtual Node *transform_once( Node *n );
+ void push_if_not_bottom_type(Unique_Node_List& worklist, Node* n) const;
+ void push_cast_ii(Unique_Node_List& worklist, const Node* parent, const Node* use) const;
public:
PhaseCCP( PhaseIterGVN *igvn ); // Compute conditional constants
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/postaloc.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/postaloc.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/postaloc.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/postaloc.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -30,7 +30,7 @@
// See if this register (or pairs, or vector) already contains the value.
static bool register_contains_value(Node* val, OptoReg::Name reg, int n_regs,
- Node_List& value) {
+ const Node_List &value) {
for (int i = 0; i < n_regs; i++) {
OptoReg::Name nreg = OptoReg::add(reg,-i);
if (value[nreg] != val)
@@ -77,7 +77,7 @@
//------------------------------yank-----------------------------------
// Helper function for yank_if_dead
-int PhaseChaitin::yank( Node *old, Block *current_block, Node_List *value, Node_List *regnd ) {
+int PhaseChaitin::yank(Node *old, Block *current_block, Node_List *value, Node_List *regnd) {
int blk_adjust=0;
Block *oldb = _cfg.get_block_for_node(old);
oldb->find_remove(old);
@@ -87,9 +87,10 @@
}
_cfg.unmap_node_from_block(old);
OptoReg::Name old_reg = lrgs(_lrg_map.live_range_id(old)).reg();
- if( regnd && (*regnd)[old_reg]==old ) { // Instruction is currently available?
- value->map(old_reg,NULL); // Yank from value/regnd maps
- regnd->map(old_reg,NULL); // This register's value is now unknown
+ assert(value != NULL || regnd == NULL, "sanity");
+ if (value != NULL && regnd != NULL && regnd->at(old_reg) == old) { // Instruction is currently available?
+ value->map(old_reg, NULL); // Yank from value/regnd maps
+ regnd->map(old_reg, NULL); // This register's value is now unknown
}
return blk_adjust;
}
@@ -161,7 +162,7 @@
// Use the prior value instead of the current value, in an effort to make
// the current value go dead. Return block iterator adjustment, in case
// we yank some instructions from this block.
-int PhaseChaitin::use_prior_register( Node *n, uint idx, Node *def, Block *current_block, Node_List &value, Node_List ®nd ) {
+int PhaseChaitin::use_prior_register( Node *n, uint idx, Node *def, Block *current_block, Node_List *value, Node_List *regnd ) {
// No effect?
if( def == n->in(idx) ) return 0;
// Def is currently dead and can be removed? Do not resurrect
@@ -207,7 +208,7 @@
_post_alloc++;
// Is old def now dead? We successfully yanked a copy?
- return yank_if_dead(old,current_block,&value,®nd);
+ return yank_if_dead(old,current_block,value,regnd);
}
@@ -229,7 +230,7 @@
//------------------------------elide_copy-------------------------------------
// Remove (bypass) copies along Node n, edge k.
-int PhaseChaitin::elide_copy( Node *n, int k, Block *current_block, Node_List &value, Node_List ®nd, bool can_change_regs ) {
+int PhaseChaitin::elide_copy( Node *n, int k, Block *current_block, Node_List *value, Node_List *regnd, bool can_change_regs ) {
int blk_adjust = 0;
uint nk_idx = _lrg_map.live_range_id(n->in(k));
@@ -253,11 +254,14 @@
// Phis and 2-address instructions cannot change registers so easily - their
// outputs must match their input.
- if( !can_change_regs )
+ if (!can_change_regs) {
return blk_adjust; // Only check stupid copies!
-
+ }
// Loop backedges won't have a value-mapping yet
- if( &value == NULL ) return blk_adjust;
+ assert(regnd != NULL || value == NULL, "sanity");
+ if (value == NULL || regnd == NULL) {
+ return blk_adjust;
+ }
// Skip through all copies to the _value_ being used. Do not change from
// int to pointer. This attempts to jump through a chain of copies, where
@@ -273,10 +277,11 @@
// See if it happens to already be in the correct register!
// (either Phi's direct register, or the common case of the name
// never-clobbered original-def register)
- if (register_contains_value(val, val_reg, n_regs, value)) {
- blk_adjust += use_prior_register(n,k,regnd[val_reg],current_block,value,regnd);
- if( n->in(k) == regnd[val_reg] ) // Success! Quit trying
- return blk_adjust;
+ if (register_contains_value(val, val_reg, n_regs, *value)) {
+ blk_adjust += use_prior_register(n,k,regnd->at(val_reg),current_block,value,regnd);
+ if (n->in(k) == regnd->at(val_reg)) {
+ return blk_adjust; // Success! Quit trying
+ }
}
// See if we can skip the copy by changing registers. Don't change from
@@ -304,7 +309,7 @@
if (ignore_self) continue;
}
- Node *vv = value[reg];
+ Node *vv = value->at(reg);
// For scalable register, number of registers may be inconsistent between
// "val_reg" and "reg". For example, when "val" resides in register
// but "reg" is located in stack.
@@ -326,7 +331,7 @@
last = (n_regs-1); // Looking for the last part of a set
}
if ((reg&last) != last) continue; // Wrong part of a set
- if (!register_contains_value(vv, reg, n_regs, value)) continue; // Different value
+ if (!register_contains_value(vv, reg, n_regs, *value)) continue; // Different value
}
if( vv == val || // Got a direct hit?
(t && vv && vv->bottom_type() == t && vv->is_Mach() &&
@@ -334,9 +339,9 @@
assert( !n->is_Phi(), "cannot change registers at a Phi so easily" );
if( OptoReg::is_stack(nk_reg) || // CISC-loading from stack OR
OptoReg::is_reg(reg) || // turning into a register use OR
- regnd[reg]->outcnt()==1 ) { // last use of a spill-load turns into a CISC use
- blk_adjust += use_prior_register(n,k,regnd[reg],current_block,value,regnd);
- if( n->in(k) == regnd[reg] ) // Success! Quit trying
+ regnd->at(reg)->outcnt()==1 ) { // last use of a spill-load turns into a CISC use
+ blk_adjust += use_prior_register(n,k,regnd->at(reg),current_block,value,regnd);
+ if( n->in(k) == regnd->at(reg) ) // Success! Quit trying
return blk_adjust;
} // End of if not degrading to a stack
} // End of if found value in another register
@@ -536,7 +541,7 @@
Block* pb = _cfg.get_block_for_node(block->pred(j));
// Remove copies along phi edges
for (uint k = 1; k < phi_dex; k++) {
- elide_copy(block->get_node(k), j, block, *blk2value[pb->_pre_order], *blk2regnd[pb->_pre_order], false);
+ elide_copy(block->get_node(k), j, block, blk2value[pb->_pre_order], blk2regnd[pb->_pre_order], false);
}
if (blk2value[pb->_pre_order]) { // Have a mapping on this edge?
// See if this predecessor's mappings have been used by everybody
@@ -692,7 +697,7 @@
// Remove copies along input edges
for (k = 1; k < n->req(); k++) {
- j -= elide_copy(n, k, block, value, regnd, two_adr != k);
+ j -= elide_copy(n, k, block, &value, ®nd, two_adr != k);
}
// Unallocated Nodes define no registers
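
These postaloc changes swap Node_List& parameters for Node_List* largely because of the old "if( &value == NULL )" test: a reference can never be null, so that check was at best dead code and at worst undefined behavior that compilers are free to delete, whereas a null pointer expresses the "no value mapping yet" state legitimately. A minimal illustration (sum_or_zero is a hypothetical stand-in for the Node_List* parameters):

    #include <cstdio>

    // A pointer parameter makes the "no mapping yet" state explicit and checkable,
    // which a reference parameter cannot do.
    static int sum_or_zero(const int* value) {
        if (value == nullptr) return 0;   // the state the old reference check tried to detect
        return *value + 1;
    }

    int main() {
        int v = 41;
        std::printf("%d %d\n", sum_or_zero(&v), sum_or_zero(nullptr));  // 42 0
    }
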
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/opto/superword.cpp openjdk-17-17.0.7+7/src/hotspot/share/opto/superword.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/opto/superword.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/opto/superword.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -2408,11 +2408,6 @@
return false;
}
- // Check that the loop to be vectorized does not have inconsistent reduction
- // information, which would likely lead to a miscompilation.
- assert(!lpt()->has_reduction_nodes() || cl->is_reduction_loop(),
- "non-reduction loop contains reduction nodes");
-
#ifndef PRODUCT
if (TraceLoopOpts) {
tty->print("SuperWord::output ");
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/prims/jvmtiManageCapabilities.cpp openjdk-17-17.0.7+7/src/hotspot/share/prims/jvmtiManageCapabilities.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/prims/jvmtiManageCapabilities.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/prims/jvmtiManageCapabilities.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -326,6 +326,12 @@
|| avail.can_generate_field_modification_events)
{
RewriteFrequentPairs = false;
+#ifdef ZERO
+ // The BytecodeInterpreter is specialized only with RewriteBytecodes
+ // for simplicity. If we want to disable RewriteFrequentPairs, we
+ // need to disable RewriteBytecodes as well.
+ RewriteBytecodes = false;
+#endif
}
// If can_redefine_classes is enabled in the onload phase then we know that the
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/prims/whitebox.cpp openjdk-17-17.0.7+7/src/hotspot/share/prims/whitebox.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/prims/whitebox.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/prims/whitebox.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -703,37 +703,6 @@
os::release_memory((char *)(uintptr_t)addr, size);
WB_END
-WB_ENTRY(jboolean, WB_NMTChangeTrackingLevel(JNIEnv* env))
- // Test that we can downgrade NMT levels but not upgrade them.
- if (MemTracker::tracking_level() == NMT_off) {
- MemTracker::transition_to(NMT_off);
- return MemTracker::tracking_level() == NMT_off;
- } else {
- assert(MemTracker::tracking_level() == NMT_detail, "Should start out as detail tracking");
- MemTracker::transition_to(NMT_summary);
- assert(MemTracker::tracking_level() == NMT_summary, "Should be summary now");
-
- // Can't go to detail once NMT is set to summary.
- MemTracker::transition_to(NMT_detail);
- assert(MemTracker::tracking_level() == NMT_summary, "Should still be summary now");
-
- // Shutdown sets tracking level to minimal.
- MemTracker::shutdown();
- assert(MemTracker::tracking_level() == NMT_minimal, "Should be minimal now");
-
- // Once the tracking level is minimal, we cannot increase to summary.
- // The code ignores this request instead of asserting because if the malloc site
- // table overflows in another thread, it tries to change the code to summary.
- MemTracker::transition_to(NMT_summary);
- assert(MemTracker::tracking_level() == NMT_minimal, "Should still be minimal now");
-
- // Really can never go up to detail, verify that the code would never do this.
- MemTracker::transition_to(NMT_detail);
- assert(MemTracker::tracking_level() == NMT_minimal, "Should still be minimal now");
- return MemTracker::tracking_level() == NMT_minimal;
- }
-WB_END
-
WB_ENTRY(jint, WB_NMTGetHashSize(JNIEnv* env, jobject o))
int hash_size = MallocSiteTable::hash_buckets();
assert(hash_size > 0, "NMT hash_size should be > 0");
@@ -2443,7 +2412,6 @@
{CC"NMTCommitMemory", CC"(JJ)V", (void*)&WB_NMTCommitMemory },
{CC"NMTUncommitMemory", CC"(JJ)V", (void*)&WB_NMTUncommitMemory },
{CC"NMTReleaseMemory", CC"(JJ)V", (void*)&WB_NMTReleaseMemory },
- {CC"NMTChangeTrackingLevel", CC"()Z", (void*)&WB_NMTChangeTrackingLevel},
{CC"NMTGetHashSize", CC"()I", (void*)&WB_NMTGetHashSize },
{CC"NMTNewArena", CC"(J)J", (void*)&WB_NMTNewArena },
{CC"NMTFreeArena", CC"(J)V", (void*)&WB_NMTFreeArena },
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/frame.cpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/frame.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/frame.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/frame.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -506,8 +506,8 @@
for (BasicObjectLock* current = interpreter_frame_monitor_end();
current < interpreter_frame_monitor_begin();
current = next_monitor_in_interpreter_frame(current)) {
- st->print(" - obj [");
- current->obj()->print_value_on(st);
+ st->print(" - obj [%s", current->obj() == nullptr ? "null" : "");
+ if (current->obj() != nullptr) current->obj()->print_value_on(st);
st->print_cr("]");
st->print(" - lock [");
current->lock()->print_on(st, current->obj());
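
The interpreter frame printer above now tolerates a null monitor object instead of unconditionally dereferencing it. The same pattern in miniature, with stand-in types (Obj and FILE* play the roles of oop and outputStream):

    #include <cstdio>

    struct Obj { void print_value_on(FILE* st) const { std::fputs("an oop", st); } };

    // Print "null" for a missing object instead of crashing in print_value_on().
    static void print_monitor_obj(FILE* st, const Obj* obj) {
        std::fprintf(st, " - obj [%s", obj == nullptr ? "null" : "");
        if (obj != nullptr) obj->print_value_on(st);
        std::fputs("]\n", st);
    }

    int main() {
        Obj o;
        print_monitor_obj(stdout, &o);       // " - obj [an oop]"
        print_monitor_obj(stdout, nullptr);  // " - obj [null]"
    }
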
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/globals.hpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/globals.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/globals.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/globals.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -545,7 +545,7 @@
"compression. Otherwise the level must be between 1 and 9.") \
range(0, 9) \
\
- product(ccstr, NativeMemoryTracking, "off", \
+ product(ccstr, NativeMemoryTracking, DEBUG_ONLY("summary") NOT_DEBUG("off"), \
"Native memory tracking options") \
\
product(bool, PrintNMTStatistics, false, DIAGNOSTIC, \
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/mutexLocker.cpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/mutexLocker.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/mutexLocker.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/mutexLocker.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -43,6 +43,7 @@
Mutex* CompiledMethod_lock = NULL;
Monitor* SystemDictionary_lock = NULL;
Mutex* SharedDictionary_lock = NULL;
+Monitor* ClassInitError_lock = NULL;
Mutex* Module_lock = NULL;
Mutex* CompiledIC_lock = NULL;
Mutex* InlineCacheBuffer_lock = NULL;
@@ -255,6 +256,7 @@
def(SystemDictionary_lock , PaddedMonitor, leaf, true, _safepoint_check_always);
def(SharedDictionary_lock , PaddedMutex , leaf, true, _safepoint_check_always);
+ def(ClassInitError_lock , PaddedMonitor, leaf+1, true, _safepoint_check_always);
def(Module_lock , PaddedMutex , leaf+2, false, _safepoint_check_always);
def(InlineCacheBuffer_lock , PaddedMutex , leaf, true, _safepoint_check_never);
def(VMStatistic_lock , PaddedMutex , leaf, false, _safepoint_check_always);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/mutexLocker.hpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/mutexLocker.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/mutexLocker.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/mutexLocker.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -35,6 +35,7 @@
extern Mutex* CompiledMethod_lock; // a lock used to guard a compiled method and OSR queues
extern Monitor* SystemDictionary_lock; // a lock on the system dictionary
extern Mutex* SharedDictionary_lock; // a lock on the CDS shared dictionary
+extern Monitor* ClassInitError_lock; // a lock on the class initialization error table
extern Mutex* Module_lock; // a lock on module and package related data structures
extern Mutex* CompiledIC_lock; // a lock used to guard compiled IC patching and access
extern Mutex* InlineCacheBuffer_lock; // a lock used to guard the InlineCacheBuffer
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/objectMonitor.cpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/objectMonitor.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/objectMonitor.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/objectMonitor.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1416,6 +1416,12 @@
"current thread is not owner", false);
}
+static inline bool is_excluded(const Klass* monitor_klass) {
+ assert(monitor_klass != nullptr, "invariant");
+ NOT_JFR_RETURN_(false);
+ JFR_ONLY(return vmSymbols::jfr_chunk_rotation_monitor() == monitor_klass->name());
+}
+
static void post_monitor_wait_event(EventJavaMonitorWait* event,
ObjectMonitor* monitor,
jlong notifier_tid,
@@ -1423,7 +1429,11 @@
bool timedout) {
assert(event != NULL, "invariant");
assert(monitor != NULL, "invariant");
- event->set_monitorClass(monitor->object()->klass());
+ const Klass* monitor_klass = monitor->object()->klass();
+ if (is_excluded(monitor_klass)) {
+ return;
+ }
+ event->set_monitorClass(monitor_klass);
event->set_timeout(timeout);
// Set an address that is 'unique enough', such that events close in
// time and with the same address are likely (but not guaranteed) to
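
The is_excluded() filter above keeps JFR from emitting JavaMonitorWait events for its own chunk-rotation monitor. A rough sketch of the idea; note the real check compares vmSymbols identities rather than strings, and the class name used below is an assumption:

    #include <cstdio>
    #include <cstring>

    // Suppress wait events for JFR's own bookkeeping monitor (illustrative name).
    static bool is_excluded(const char* monitor_klass_name) {
        return std::strcmp(monitor_klass_name,
                           "jdk/jfr/internal/JVM$ChunkRotationMonitor") == 0;
    }

    static void post_monitor_wait_event(const char* klass_name) {
        if (is_excluded(klass_name)) return;   // drop the event entirely
        std::printf("JavaMonitorWait on %s\n", klass_name);
    }

    int main() {
        post_monitor_wait_event("java/lang/Object");                          // recorded
        post_monitor_wait_event("jdk/jfr/internal/JVM$ChunkRotationMonitor"); // suppressed
    }
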
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/os.cpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/os.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/os.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/os.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -38,7 +38,6 @@
#include "logging/log.hpp"
#include "logging/logStream.hpp"
#include "memory/allocation.inline.hpp"
-#include "memory/guardedMemory.hpp"
#include "memory/resourceArea.hpp"
#include "memory/universe.hpp"
#include "oops/compressedOops.inline.hpp"
@@ -83,13 +82,6 @@
int os::_initial_active_processor_count = 0;
os::PageSizes os::_page_sizes;
-#ifndef PRODUCT
-julong os::num_mallocs = 0; // # of calls to malloc/realloc
-julong os::alloc_bytes = 0; // # of bytes allocated
-julong os::num_frees = 0; // # of calls to free
-julong os::free_bytes = 0; // # of bytes freed
-#endif
-
static size_t cur_malloc_words = 0; // current size for MallocMaxTestWords
DEBUG_ONLY(bool os::_mutex_init_done = false;)
@@ -636,30 +628,11 @@
return p;
}
-
-#define paranoid 0 /* only set to 1 if you suspect checking code has bug */
-
-#ifdef ASSERT
-
-static void verify_memory(void* ptr) {
- GuardedMemory guarded(ptr);
- if (!guarded.verify_guards()) {
- LogTarget(Warning, malloc, free) lt;
- ResourceMark rm;
- LogStream ls(lt);
- ls.print_cr("## nof_mallocs = " UINT64_FORMAT ", nof_frees = " UINT64_FORMAT, os::num_mallocs, os::num_frees);
- ls.print_cr("## memory stomp:");
- guarded.print_on(&ls);
- fatal("memory stomping error");
- }
-}
-
-#endif
-
//
// This function supports testing of the malloc out of memory
// condition without really running the system out of memory.
//
+
static bool has_reached_max_malloc_test_peak(size_t alloc_size) {
if (MallocMaxTestWords > 0) {
size_t words = (alloc_size / BytesPerWord);
@@ -672,13 +645,24 @@
return false;
}
+#ifdef ASSERT
+static void check_crash_protection() {
+ assert(!os::ThreadCrashProtection::is_crash_protected(Thread::current_or_null()),
+ "not allowed when crash protection is set");
+}
+static void break_if_ptr_caught(void* ptr) {
+ if (p2i(ptr) == (intptr_t)MallocCatchPtr) {
+ log_warning(malloc, free)("ptr caught: " PTR_FORMAT, p2i(ptr));
+ breakpoint();
+ }
+}
+#endif // ASSERT
+
void* os::malloc(size_t size, MEMFLAGS flags) {
return os::malloc(size, flags, CALLER_PC);
}
void* os::malloc(size_t size, MEMFLAGS memflags, const NativeCallStack& stack) {
- NOT_PRODUCT(inc_stat_counter(&num_mallocs, 1));
- NOT_PRODUCT(inc_stat_counter(&alloc_bytes, size));
#if INCLUDE_NMT
{
@@ -689,62 +673,40 @@
}
#endif
- // Since os::malloc can be called when the libjvm.{dll,so} is
- // first loaded and we don't have a thread yet we must accept NULL also here.
- assert(!os::ThreadCrashProtection::is_crash_protected(Thread::current_or_null()),
- "malloc() not allowed when crash protection is set");
-
- if (size == 0) {
- // return a valid pointer if size is zero
- // if NULL is returned the calling functions assume out of memory.
- size = 1;
- }
+ DEBUG_ONLY(check_crash_protection());
- // NMT support
- NMT_TrackingLevel level = MemTracker::tracking_level();
- size_t nmt_header_size = MemTracker::malloc_header_size(level);
-
- // Check for overflow.
- if (size + nmt_header_size < size) {
- return NULL;
- }
-
-#ifndef ASSERT
- const size_t alloc_size = size + nmt_header_size;
-#else
- const size_t alloc_size = GuardedMemory::get_total_size(size + nmt_header_size);
- if (size + nmt_header_size > alloc_size) { // Check for rollover.
- return NULL;
- }
-#endif
+ // On malloc(0), implementers of malloc(3) have the choice to return either
+ // NULL or a unique non-NULL pointer. To unify libc behavior across our
+ // platforms, we chose the latter.
+ size = MAX2((size_t)1, size);
// For the test flag -XX:MallocMaxTestWords
if (has_reached_max_malloc_test_peak(size)) {
return NULL;
}
- u_char* ptr;
- ptr = (u_char*)::malloc(alloc_size);
+ const NMT_TrackingLevel level = MemTracker::tracking_level();
+ const size_t nmt_overhead =
+ MemTracker::malloc_header_size(level) + MemTracker::malloc_footer_size(level);
-#ifdef ASSERT
- if (ptr == NULL) {
+ const size_t outer_size = size + nmt_overhead;
+
+ // Check for overflow.
+ if (outer_size < size) {
return NULL;
}
- // Wrap memory with guard
- GuardedMemory guarded(ptr, size + nmt_header_size);
- ptr = guarded.get_user_ptr();
- if ((intptr_t)ptr == (intptr_t)MallocCatchPtr) {
- log_warning(malloc, free)("os::malloc caught, " SIZE_FORMAT " bytes --> " PTR_FORMAT, size, p2i(ptr));
- breakpoint();
- }
- if (paranoid) {
- verify_memory(ptr);
+ void* const outer_ptr = (u_char*)::malloc(outer_size);
+ if (outer_ptr == NULL) {
+ return NULL;
}
-#endif
- // we do not track guard memory
- return MemTracker::record_malloc((address)ptr, size, memflags, stack, level);
+ void* inner_ptr = MemTracker::record_malloc((address)outer_ptr, size, memflags, stack, level);
+
+ DEBUG_ONLY(::memset(inner_ptr, uninitBlockPad, size);)
+ DEBUG_ONLY(break_if_ptr_caught(inner_ptr);)
+
+ return inner_ptr;
}
void* os::realloc(void *memblock, size_t size, MEMFLAGS flags) {
@@ -762,55 +724,41 @@
}
#endif
+ if (memblock == NULL) {
+ return os::malloc(size, memflags, stack);
+ }
+
+ DEBUG_ONLY(check_crash_protection());
+
+ // On realloc(p, 0), implementers of realloc(3) have the choice to return either
+ // NULL or a unique non-NULL pointer. To unify libc behavior across our
+ // platforms, we chose the latter.
+ size = MAX2((size_t)1, size);
+
// For the test flag -XX:MallocMaxTestWords
if (has_reached_max_malloc_test_peak(size)) {
return NULL;
}
- if (size == 0) {
- // return a valid pointer if size is zero
- // if NULL is returned the calling functions assume out of memory.
- size = 1;
- }
+ const NMT_TrackingLevel level = MemTracker::tracking_level();
+ const size_t nmt_overhead =
+ MemTracker::malloc_header_size(level) + MemTracker::malloc_footer_size(level);
-#ifndef ASSERT
- NOT_PRODUCT(inc_stat_counter(&num_mallocs, 1));
- NOT_PRODUCT(inc_stat_counter(&alloc_bytes, size));
- // NMT support
- NMT_TrackingLevel level = MemTracker::tracking_level();
- void* membase = MemTracker::record_free(memblock, level);
- size_t nmt_header_size = MemTracker::malloc_header_size(level);
- void* ptr = ::realloc(membase, size + nmt_header_size);
- return MemTracker::record_malloc(ptr, size, memflags, stack, level);
-#else
- if (memblock == NULL) {
- return os::malloc(size, memflags, stack);
- }
- if ((intptr_t)memblock == (intptr_t)MallocCatchPtr) {
- log_warning(malloc, free)("os::realloc caught " PTR_FORMAT, p2i(memblock));
- breakpoint();
- }
- // NMT support
- void* membase = MemTracker::malloc_base(memblock);
- verify_memory(membase);
- // always move the block
- void* ptr = os::malloc(size, memflags, stack);
- // Copy to new memory if malloc didn't fail
- if (ptr != NULL ) {
- GuardedMemory guarded(MemTracker::malloc_base(memblock));
- // Guard's user data contains NMT header
- size_t memblock_size = guarded.get_user_size() - MemTracker::malloc_header_size(memblock);
- memcpy(ptr, memblock, MIN2(size, memblock_size));
- if (paranoid) {
- verify_memory(MemTracker::malloc_base(ptr));
- }
- os::free(memblock);
- }
- return ptr;
-#endif
+ const size_t new_outer_size = size + nmt_overhead;
+
+ // If NMT is enabled, this checks for heap overwrites, then de-accounts the old block.
+ void* const old_outer_ptr = MemTracker::record_free(memblock, level);
+
+ void* const new_outer_ptr = ::realloc(old_outer_ptr, new_outer_size);
+
+ // If NMT is enabled, this accounts the new block and returns the inner (user) pointer.
+ void* const new_inner_ptr = MemTracker::record_malloc(new_outer_ptr, size, memflags, stack, level);
+
+ DEBUG_ONLY(break_if_ptr_caught(new_inner_ptr);)
+
+ return new_inner_ptr;
}
-// handles NULL pointers
void os::free(void *memblock) {
#if INCLUDE_NMT
@@ -819,25 +767,17 @@
}
#endif
- NOT_PRODUCT(inc_stat_counter(&num_frees, 1));
-#ifdef ASSERT
- if (memblock == NULL) return;
- if ((intptr_t)memblock == (intptr_t)MallocCatchPtr) {
- log_warning(malloc, free)("os::free caught " PTR_FORMAT, p2i(memblock));
- breakpoint();
+ if (memblock == NULL) {
+ return;
}
- void* membase = MemTracker::record_free(memblock, MemTracker::tracking_level());
- verify_memory(membase);
- GuardedMemory guarded(membase);
- size_t size = guarded.get_user_size();
- inc_stat_counter(&free_bytes, size);
- membase = guarded.release_for_freeing();
- ::free(membase);
-#else
- void* membase = MemTracker::record_free(memblock, MemTracker::tracking_level());
- ::free(membase);
-#endif
+ DEBUG_ONLY(break_if_ptr_caught(memblock);)
+
+ const NMT_TrackingLevel level = MemTracker::tracking_level();
+
+ // If NMT is enabled, this checks for heap overwrites, then de-accounts the old block.
+ void* const old_outer_ptr = MemTracker::record_free(memblock, level);
+ ::free(old_outer_ptr);
}
void os::init_random(unsigned int initval) {
@@ -1815,7 +1755,7 @@
bool os::uncommit_memory(char* addr, size_t bytes, bool executable) {
bool res;
- if (MemTracker::tracking_level() > NMT_minimal) {
+ if (MemTracker::enabled()) {
Tracker tkr(Tracker::uncommit);
res = pd_uncommit_memory(addr, bytes, executable);
if (res) {
@@ -1829,7 +1769,7 @@
bool os::release_memory(char* addr, size_t bytes) {
bool res;
- if (MemTracker::tracking_level() > NMT_minimal) {
+ if (MemTracker::enabled()) {
// Note: Tracker contains a ThreadCritical.
Tracker tkr(Tracker::release);
res = pd_release_memory(addr, bytes);
@@ -1898,7 +1838,7 @@
bool os::unmap_memory(char *addr, size_t bytes) {
bool result;
- if (MemTracker::tracking_level() > NMT_minimal) {
+ if (MemTracker::enabled()) {
Tracker tkr(Tracker::release);
result = pd_unmap_memory(addr, bytes);
if (result) {
@@ -1934,7 +1874,7 @@
bool os::release_memory_special(char* addr, size_t bytes) {
bool res;
- if (MemTracker::tracking_level() > NMT_minimal) {
+ if (MemTracker::enabled()) {
// Note: Tracker contains a ThreadCritical.
Tracker tkr(Tracker::release);
res = pd_release_memory_special(addr, bytes);
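
The reworked os::malloc above replaces the old GuardedMemory wrapper with a fixed NMT header plus a footer canary around the user block, distinguishing the "outer" pointer returned by libc from the "inner" pointer handed to callers. A minimal sketch of that pointer arithmetic, with a hypothetical 16-byte header that records only the size (the real header also stores flags, site indices, and canaries):

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Sizes mirroring the scheme in the hunk above; assumptions, not HotSpot's types.
    static const size_t header_size = 16;                 // header precedes the payload
    static const size_t footer_size = sizeof(uint16_t);   // 2-byte canary follows it

    static void* my_malloc(size_t size) {
        size = size ? size : 1;                            // unify malloc(0) across libcs
        const size_t outer_size = size + header_size + footer_size;
        if (outer_size < size) return nullptr;             // overflow check, as above
        uint8_t* outer = (uint8_t*)std::malloc(outer_size);
        if (outer == nullptr) return nullptr;
        std::memcpy(outer, &size, sizeof(size));           // "header": just the user size here
        uint8_t* inner = outer + header_size;              // user payload starts here
        inner[size] = 0xE8; inner[size + 1] = 0x8E;        // byte-wise footer canary
        return inner;                                      // callers only ever see 'inner'
    }

    static void my_free(void* inner) {
        if (inner == nullptr) return;                      // handles free(NULL)
        std::free((uint8_t*)inner - header_size);          // recover the outer pointer
    }

    int main() {
        void* p = my_malloc(32);
        std::printf("user ptr: %p\n", p);
        my_free(p);
    }
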
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/os.hpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/os.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/os.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/os.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -789,13 +789,6 @@
// Like strdup, but exit VM when strdup() returns NULL
static char* strdup_check_oom(const char*, MEMFLAGS flags = mtInternal);
-#ifndef PRODUCT
- static julong num_mallocs; // # of calls to malloc/realloc
- static julong alloc_bytes; // # of bytes allocated
- static julong num_frees; // # of calls to free
- static julong free_bytes; // # of bytes freed
-#endif
-
// SocketInterface (ex HPI SocketInterface )
static int socket(int domain, int type, int protocol);
static int socket_close(int fd);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/sharedRuntime.cpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/sharedRuntime.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/sharedRuntime.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/sharedRuntime.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1946,8 +1946,6 @@
JRT_LEAF(void, SharedRuntime::fixup_callers_callsite(Method* method, address caller_pc))
Method* moop(method);
- address entry_point = moop->from_compiled_entry_no_trampoline();
-
// It's possible that deoptimization can occur at a call site which hasn't
// been resolved yet, in which case this function will be called from
// an nmethod that has been patched for deopt and we can ignore the
@@ -1958,8 +1956,16 @@
// "to interpreter" stub in order to load up the Method*. Don't
// ask me how I know this...
+ // Result from nmethod::is_unloading is not stable across safepoints.
+ NoSafepointVerifier nsv;
+
+ CompiledMethod* callee = moop->code();
+ if (callee == NULL) {
+ return;
+ }
+
CodeBlob* cb = CodeCache::find_blob(caller_pc);
- if (cb == NULL || !cb->is_compiled() || entry_point == moop->get_c2i_entry()) {
+ if (cb == NULL || !cb->is_compiled() || callee->is_unloading()) {
return;
}
@@ -2007,6 +2013,7 @@
return;
}
address destination = call->destination();
+ address entry_point = callee->verified_entry_point();
if (should_fixup_call_destination(destination, entry_point, caller_pc, moop, cb)) {
call->set_destination_mt_safe(entry_point);
}
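
fixup_callers_callsite now resolves the callee's CompiledMethod once and pins the answer with a NoSafepointVerifier, since the result of nmethod::is_unloading() is not stable across safepoints. A toy illustration of the scoped-verifier idea, under the assumption of a single-threaded demo (this is not HotSpot's API):

    #include <cassert>
    #include <cstdio>

    static int g_no_safepoint_depth = 0;

    // While a scope object is live, reaching a (simulated) safepoint asserts,
    // so any state cached inside the scope stays valid.
    struct NoSafepointScope {
        NoSafepointScope()  { ++g_no_safepoint_depth; }
        ~NoSafepointScope() { --g_no_safepoint_depth; }
    };

    static void reach_safepoint() {
        assert(g_no_safepoint_depth == 0 && "safepoint inside a no-safepoint scope");
        std::puts("safepoint");
    }

    int main() {
        reach_safepoint();              // fine
        {
            NoSafepointScope guard;     // cached is_unloading()-style answers stay stable here
            // reach_safepoint();       // would assert
        }
        reach_safepoint();              // fine again
    }
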
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/vmOperations.hpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/vmOperations.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/vmOperations.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/vmOperations.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -45,6 +45,10 @@
public:
VMOp_Type type() const { return VMOp_Cleanup; }
void doit() {};
+ virtual bool skip_thread_oop_barriers() const {
+ // None of the safepoint cleanup tasks read oops in the Java threads.
+ return true;
+ }
};
class VM_ClearICs: public VM_Operation {
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/runtime/vmStructs.hpp openjdk-17-17.0.7+7/src/hotspot/share/runtime/vmStructs.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/runtime/vmStructs.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/runtime/vmStructs.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -188,13 +188,18 @@
#ifdef ASSERT
// This macro checks the type of a VMStructEntry by comparing pointer types
-#define CHECK_NONSTATIC_VM_STRUCT_ENTRY(typeName, fieldName, type) \
- {typeName *dummyObj = NULL; type* dummy = &dummyObj->fieldName; \
- assert(offset_of(typeName, fieldName) < sizeof(typeName), "Illegal nonstatic struct entry, field offset too large"); }
+#define CHECK_NONSTATIC_VM_STRUCT_ENTRY(typeName, fieldName, type) { \
+ static_assert( \
+ std::is_convertible< \
+ std::add_pointer_t<decltype(std::declval<typeName>().fieldName)>, \
+ std::add_pointer_t<type>>::value, \
+ "type mismatch for " XSTR(fieldName) " member of " XSTR(typeName)); \
+ assert(offset_of(typeName, fieldName) < sizeof(typeName), "..."); \
+}
// This macro checks the type of a volatile VMStructEntry by comparing pointer types
-#define CHECK_VOLATILE_NONSTATIC_VM_STRUCT_ENTRY(typeName, fieldName, type) \
- {typedef type dummyvtype; typeName *dummyObj = NULL; volatile dummyvtype* dummy = &dummyObj->fieldName; }
+#define CHECK_VOLATILE_NONSTATIC_VM_STRUCT_ENTRY(typeName, fieldName, type) \
+ CHECK_NONSTATIC_VM_STRUCT_ENTRY(typeName, fieldName, std::add_volatile_t<type>)
// This macro checks the type of a static VMStructEntry by comparing pointer types
#define CHECK_STATIC_VM_STRUCT_ENTRY(typeName, fieldName, type) \
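
The rewritten macro trades the old pointer-assignment trick for a static_assert over std::is_convertible, so a mismatched VMStructs entry fails at compile time with a readable message. A standalone illustration of the same check (Foo and its field are hypothetical; the real macro is expanded over the VMStructs entry list):

    #include <type_traits>
    #include <utility>

    struct Foo { int x; };

    // Passes: a pointer to the actual field type converts to a pointer to the
    // declared type. Declaring 'long' instead of 'int' below would fail the
    // assertion, which is exactly how the macro catches stale entries.
    static_assert(
        std::is_convertible<
            std::add_pointer_t<decltype(std::declval<Foo>().x)>,  // int*
            std::add_pointer_t<int>>::value,                      // declared type
        "type mismatch for x member of Foo");

    int main() {}
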
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/mallocSiteTable.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/mallocSiteTable.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/mallocSiteTable.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/mallocSiteTable.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -33,13 +33,6 @@
const NativeCallStack* MallocSiteTable::_hash_entry_allocation_stack = NULL;
const MallocSiteHashtableEntry* MallocSiteTable::_hash_entry_allocation_site = NULL;
-// concurrent access counter
-volatile int MallocSiteTable::_access_count = 0;
-
-// Tracking hashtable contention
-NOT_PRODUCT(int MallocSiteTable::_peak_count = 0;)
-
-
/*
* Initialize malloc site table.
* Hashtable entry is malloc'd, so it can cause infinite recursion.
@@ -49,7 +42,6 @@
* time, it is in single-threaded mode from JVM perspective.
*/
bool MallocSiteTable::initialize() {
- assert((size_t)table_size <= MAX_MALLOCSITE_TABLE_SIZE, "Hashtable overflow");
// Fake the call stack for hashtable entry allocation
assert(NMT_TrackingStackDepth > 1, "At least one tracking stack");
@@ -204,122 +196,81 @@
}
}
-void MallocSiteTable::shutdown() {
- AccessLock locker(&_access_count);
- locker.exclusiveLock();
- reset();
-}
-
bool MallocSiteTable::walk_malloc_site(MallocSiteWalker* walker) {
assert(walker != NULL, "NuLL walker");
- AccessLock locker(&_access_count);
- if (locker.sharedLock()) {
- NOT_PRODUCT(_peak_count = MAX2(_peak_count, _access_count);)
- return walk(walker);
- }
- return false;
-}
-
-
-void MallocSiteTable::AccessLock::exclusiveLock() {
- int target;
- int val;
-
- assert(_lock_state != ExclusiveLock, "Can only call once");
- assert(*_lock >= 0, "Can not content exclusive lock");
-
- // make counter negative to block out shared locks
- do {
- val = *_lock;
- target = _MAGIC_ + *_lock;
- } while (Atomic::cmpxchg(_lock, val, target) != val);
-
- // wait for all readers to exit
- while (*_lock != _MAGIC_) {
-#ifdef _WINDOWS
- os::naked_short_sleep(1);
-#else
- os::naked_yield();
-#endif
- }
- _lock_state = ExclusiveLock;
+ return walk(walker);
}
void MallocSiteTable::print_tuning_statistics(outputStream* st) {
-
- AccessLock locker(&_access_count);
- if (locker.sharedLock()) {
- // Total number of allocation sites, include empty sites
- int total_entries = 0;
- // Number of allocation sites that have all memory freed
- int empty_entries = 0;
- // Number of captured call stack distribution
- int stack_depth_distribution[NMT_TrackingStackDepth + 1] = { 0 };
- // Chain lengths
- int lengths[table_size] = { 0 };
-
- for (int i = 0; i < table_size; i ++) {
- int this_chain_length = 0;
- const MallocSiteHashtableEntry* head = _table[i];
- while (head != NULL) {
- total_entries ++;
- this_chain_length ++;
- if (head->size() == 0) {
- empty_entries ++;
- }
- const int callstack_depth = head->peek()->call_stack()->frames();
- assert(callstack_depth >= 0 && callstack_depth <= NMT_TrackingStackDepth,
- "Sanity (%d)", callstack_depth);
- stack_depth_distribution[callstack_depth] ++;
- head = head->next();
- }
- lengths[i] = this_chain_length;
- }
-
- st->print_cr("Malloc allocation site table:");
- st->print_cr("\tTotal entries: %d", total_entries);
- st->print_cr("\tEmpty entries: %d (%2.2f%%)", empty_entries, ((float)empty_entries * 100) / total_entries);
- st->cr();
-
- // We report the hash distribution (chain length distribution) of the n shortest chains
- // - under the assumption that this usually contains all lengths. Reporting threshold
- // is 20, and the expected avg chain length is 5..6 (see table size).
- static const int chain_length_threshold = 20;
- int chain_length_distribution[chain_length_threshold] = { 0 };
- int over_threshold = 0;
- int longest_chain_length = 0;
- for (int i = 0; i < table_size; i ++) {
- if (lengths[i] >= chain_length_threshold) {
- over_threshold ++;
- } else {
- chain_length_distribution[lengths[i]] ++;
+ // Total number of allocation sites, include empty sites
+ int total_entries = 0;
+ // Number of allocation sites that have all memory freed
+ int empty_entries = 0;
+ // Number of captured call stack distribution
+ int stack_depth_distribution[NMT_TrackingStackDepth + 1] = { 0 };
+ // Chain lengths
+ int lengths[table_size] = { 0 };
+
+ for (int i = 0; i < table_size; i ++) {
+ int this_chain_length = 0;
+ const MallocSiteHashtableEntry* head = _table[i];
+ while (head != NULL) {
+ total_entries ++;
+ this_chain_length ++;
+ if (head->size() == 0) {
+ empty_entries ++;
}
- longest_chain_length = MAX2(longest_chain_length, lengths[i]);
+ const int callstack_depth = head->peek()->call_stack()->frames();
+ assert(callstack_depth >= 0 && callstack_depth <= NMT_TrackingStackDepth,
+ "Sanity (%d)", callstack_depth);
+ stack_depth_distribution[callstack_depth] ++;
+ head = head->next();
}
+ lengths[i] = this_chain_length;
+ }
- st->print_cr("Hash distribution:");
- if (chain_length_distribution[0] == 0) {
- st->print_cr("no empty buckets.");
+ st->print_cr("Malloc allocation site table:");
+ st->print_cr("\tTotal entries: %d", total_entries);
+ st->print_cr("\tEmpty entries: %d (%2.2f%%)", empty_entries, ((float)empty_entries * 100) / total_entries);
+ st->cr();
+
+ // We report the hash distribution (chain length distribution) of the n shortest chains
+ // - under the assumption that this usually contains all lengths. Reporting threshold
+ // is 20, and the expected avg chain length is 5..6 (see table size).
+ static const int chain_length_threshold = 20;
+ int chain_length_distribution[chain_length_threshold] = { 0 };
+ int over_threshold = 0;
+ int longest_chain_length = 0;
+ for (int i = 0; i < table_size; i ++) {
+ if (lengths[i] >= chain_length_threshold) {
+ over_threshold ++;
} else {
- st->print_cr("%d buckets are empty.", chain_length_distribution[0]);
- }
- for (int len = 1; len < MIN2(longest_chain_length + 1, chain_length_threshold); len ++) {
- st->print_cr("%2d %s: %d.", len, (len == 1 ? " entry" : "entries"), chain_length_distribution[len]);
+ chain_length_distribution[lengths[i]] ++;
}
- if (longest_chain_length >= chain_length_threshold) {
- st->print_cr(">=%2d entries: %d.", chain_length_threshold, over_threshold);
- }
- st->print_cr("most entries: %d.", longest_chain_length);
- st->cr();
+ longest_chain_length = MAX2(longest_chain_length, lengths[i]);
+ }
- st->print_cr("Call stack depth distribution:");
- for (int i = 0; i <= NMT_TrackingStackDepth; i ++) {
- st->print_cr("\t%d: %d", i, stack_depth_distribution[i]);
- }
- st->cr();
- } // lock
-}
+ st->print_cr("Hash distribution:");
+ if (chain_length_distribution[0] == 0) {
+ st->print_cr("no empty buckets.");
+ } else {
+ st->print_cr("%d buckets are empty.", chain_length_distribution[0]);
+ }
+ for (int len = 1; len < MIN2(longest_chain_length + 1, chain_length_threshold); len ++) {
+ st->print_cr("%2d %s: %d.", len, (len == 1 ? " entry" : "entries"), chain_length_distribution[len]);
+ }
+ if (longest_chain_length >= chain_length_threshold) {
+ st->print_cr(">=%2d entries: %d.", chain_length_threshold, over_threshold);
+ }
+ st->print_cr("most entries: %d.", longest_chain_length);
+ st->cr();
+ st->print_cr("Call stack depth distribution:");
+ for (int i = 0; i <= NMT_TrackingStackDepth; i ++) {
+ st->print_cr("\t%d: %d", i, stack_depth_distribution[i]);
+ }
+ st->cr();
+}
bool MallocSiteHashtableEntry::atomic_insert(MallocSiteHashtableEntry* entry) {
return Atomic::replace_if_null(&_next, entry);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/mallocSiteTable.hpp openjdk-17-17.0.7+7/src/hotspot/share/services/mallocSiteTable.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/mallocSiteTable.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/mallocSiteTable.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -114,55 +114,12 @@
table_size = (table_base_size * NMT_TrackingStackDepth - 1)
};
-
- // This is a very special lock, that allows multiple shared accesses (sharedLock), but
- // once exclusive access (exclusiveLock) is requested, all shared accesses are
- // rejected forever.
- class AccessLock : public StackObj {
- enum LockState {
- NoLock,
- SharedLock,
- ExclusiveLock
- };
-
- private:
- // A very large negative number. The only possibility to "overflow"
- // this number is when there are more than -min_jint threads in
- // this process, which is not going to happen in foreseeable future.
- const static int _MAGIC_ = min_jint;
-
- LockState _lock_state;
- volatile int* _lock;
- public:
- AccessLock(volatile int* lock) :
- _lock_state(NoLock), _lock(lock) {
- }
-
- ~AccessLock() {
- if (_lock_state == SharedLock) {
- Atomic::dec(_lock);
- }
- }
- // Acquire shared lock.
- // Return true if shared access is granted.
- inline bool sharedLock() {
- jint res = Atomic::add(_lock, 1);
- if (res < 0) {
- Atomic::dec(_lock);
- return false;
- }
- _lock_state = SharedLock;
- return true;
- }
- // Acquire exclusive lock
- void exclusiveLock();
- };
+ // The table must not be wider than the maximum value the bucket_idx field
+ // in the malloc header can hold.
+ STATIC_ASSERT(table_size <= MAX_MALLOCSITE_TABLE_SIZE);
public:
static bool initialize();
- static void shutdown();
-
- NOT_PRODUCT(static int access_peak_count() { return _peak_count; })
// Number of hash buckets
static inline int hash_buckets() { return (int)table_size; }
@@ -171,14 +128,10 @@
// acquired before access the entry.
static inline bool access_stack(NativeCallStack& stack, size_t bucket_idx,
size_t pos_idx) {
- AccessLock locker(&_access_count);
- if (locker.sharedLock()) {
- NOT_PRODUCT(_peak_count = MAX2(_peak_count, _access_count);)
- MallocSite* site = malloc_site(bucket_idx, pos_idx);
- if (site != NULL) {
- stack = *site->call_stack();
- return true;
- }
+ MallocSite* site = malloc_site(bucket_idx, pos_idx);
+ if (site != NULL) {
+ stack = *site->call_stack();
+ return true;
}
return false;
}
@@ -192,27 +145,18 @@
// 2. overflow hash bucket
static inline bool allocation_at(const NativeCallStack& stack, size_t size,
size_t* bucket_idx, size_t* pos_idx, MEMFLAGS flags) {
- AccessLock locker(&_access_count);
- if (locker.sharedLock()) {
- NOT_PRODUCT(_peak_count = MAX2(_peak_count, _access_count);)
- MallocSite* site = lookup_or_add(stack, bucket_idx, pos_idx, flags);
- if (site != NULL) site->allocate(size);
- return site != NULL;
- }
- return false;
+ MallocSite* site = lookup_or_add(stack, bucket_idx, pos_idx, flags);
+ if (site != NULL) site->allocate(size);
+ return site != NULL;
}
// Record memory deallocation. bucket_idx and pos_idx indicate where the allocation
// information was recorded.
static inline bool deallocation_at(size_t size, size_t bucket_idx, size_t pos_idx) {
- AccessLock locker(&_access_count);
- if (locker.sharedLock()) {
- NOT_PRODUCT(_peak_count = MAX2(_peak_count, _access_count);)
- MallocSite* site = malloc_site(bucket_idx, pos_idx);
- if (site != NULL) {
- site->deallocate(size);
- return true;
- }
+ MallocSite* site = malloc_site(bucket_idx, pos_idx);
+ if (site != NULL) {
+ site->deallocate(size);
+ return true;
}
return false;
}
@@ -248,17 +192,11 @@
}
private:
- // Counter for counting concurrent access
- static volatile int _access_count;
-
// The callsite hashtable. It has to be a static table,
// since malloc call can come from C runtime linker.
static MallocSiteHashtableEntry* _table[table_size];
static const NativeCallStack* _hash_entry_allocation_stack;
static const MallocSiteHashtableEntry* _hash_entry_allocation_site;
-
-
- NOT_PRODUCT(static int _peak_count;)
};
#endif // INCLUDE_NMT
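
One reason the reader-side AccessLock could be dropped is that the site table is append-only: entries are only ever linked onto a null next pointer with an atomic compare-and-swap (see MallocSiteHashtableEntry::atomic_insert above), so readers can walk the chains without locking. A sketch of that insert with std::atomic standing in for Atomic::replace_if_null:

    #include <atomic>
    #include <cstdio>

    struct Entry {
        int value;
        std::atomic<Entry*> next{nullptr};
    };

    // Succeeds only if 'pos->next' is still null; a concurrent winner makes us fail
    // and retry further down the chain in the real table.
    static bool atomic_insert(Entry* pos, Entry* e) {
        Entry* expected = nullptr;
        return pos->next.compare_exchange_strong(expected, e);
    }

    int main() {
        Entry head{0}, a{1}, b{2};
        std::printf("%d %d\n", atomic_insert(&head, &a), atomic_insert(&head, &b)); // 1 0
    }
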
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/mallocTracker.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/mallocTracker.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/mallocTracker.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/mallocTracker.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -23,10 +23,13 @@
*/
#include "precompiled.hpp"
+#include "runtime/os.hpp"
#include "services/mallocSiteTable.hpp"
#include "services/mallocTracker.hpp"
#include "services/mallocTracker.inline.hpp"
#include "services/memTracker.hpp"
+#include "utilities/debug.hpp"
+#include "utilities/ostream.hpp"
size_t MallocMemorySummary::_snapshot[CALC_OBJ_SIZE_IN_TYPE(MallocMemorySnapshot, size_t)];
@@ -103,28 +106,122 @@
::new ((void*)_snapshot)MallocMemorySnapshot();
}
-void MallocHeader::release() const {
- // Tracking already shutdown, no housekeeping is needed anymore
- if (MemTracker::tracking_level() <= NMT_minimal) return;
+void MallocHeader::mark_block_as_dead() {
+ _canary = _header_canary_dead_mark;
+ NOT_LP64(_alt_canary = _header_alt_canary_dead_mark);
+ set_footer(_footer_canary_dead_mark);
+}
+
+void MallocHeader::release() {
+ assert(MemTracker::enabled(), "Sanity");
+
+ check_block_integrity();
MallocMemorySummary::record_free(size(), flags());
MallocMemorySummary::record_free_malloc_header(sizeof(MallocHeader));
if (MemTracker::tracking_level() == NMT_detail) {
MallocSiteTable::deallocation_at(size(), _bucket_idx, _pos_idx);
}
+
+ mark_block_as_dead();
}
-bool MallocHeader::record_malloc_site(const NativeCallStack& stack, size_t size,
- size_t* bucket_idx, size_t* pos_idx, MEMFLAGS flags) const {
- bool ret = MallocSiteTable::allocation_at(stack, size, bucket_idx, pos_idx, flags);
+void MallocHeader::print_block_on_error(outputStream* st, address bad_address) const {
+ assert(bad_address >= (address)this, "sanity");
- // Something went wrong, could be OOM or overflow malloc site table.
- // We want to keep tracking data under OOM circumstance, so transition to
- // summary tracking.
- if (!ret) {
- MemTracker::transition_to(NMT_summary);
+ // This function prints block information, including hex dump, in case of a detected
+ // corruption. The hex dump should show both block header and corruption site
+ // (which may or may not be close together or identical). Plus some surrounding area.
+ //
+ // Note that we use os::print_hex_dump(), which is able to cope with unmapped
+ // memory (it uses SafeFetch).
+
+ st->print_cr("NMT Block at " PTR_FORMAT ", corruption at: " PTR_FORMAT ": ",
+ p2i(this), p2i(bad_address));
+ static const size_t min_dump_length = 256;
+ address from1 = align_down((address)this, sizeof(void*)) - (min_dump_length / 2);
+ address to1 = from1 + min_dump_length;
+ address from2 = align_down(bad_address, sizeof(void*)) - (min_dump_length / 2);
+ address to2 = from2 + min_dump_length;
+ if (from2 > to1) {
+ // Dump gets too large, split up in two sections.
+ os::print_hex_dump(st, from1, to1, 1);
+ st->print_cr("...");
+ os::print_hex_dump(st, from2, to2, 1);
+ } else {
+ // print one hex dump
+ os::print_hex_dump(st, from1, to2, 1);
+ }
+}
+
+// Check block integrity. If block is broken, print out a report
+// to tty (optionally with hex dump surrounding the broken block),
+// then trigger a fatal error.
+void MallocHeader::check_block_integrity() const {
+
+#define PREFIX "NMT corruption: "
+ // Note: if you modify the error messages here, make sure you
+ // adapt the associated gtests too.
+
+ // Weed out obviously wrong block addresses of NULL or very low
+ // values. Note that we should not call this for ::free(NULL),
+ // which should be handled by os::free() above us.
+ if (((size_t)p2i(this)) < K) {
+ fatal(PREFIX "Block at " PTR_FORMAT ": invalid block address", p2i(this));
+ }
+
+ // From here on we assume the block pointer to be valid. We could
+ // use SafeFetch but since this is a hot path we don't. If we are
+ // wrong, we will crash when accessing the canary, which hopefully
+ // generates a distinct crash report.
+
+ // Weed out obviously unaligned addresses. NMT blocks, being the result of
+ // malloc calls, should adhere to malloc() alignment. Malloc alignment is
+ // specified by the standard by this requirement:
+ // "malloc returns a pointer which is suitably aligned for any built-in type"
+ // For us it means that it is *at least* 64-bit on all of our 32-bit and
+ // 64-bit platforms since we have native 64-bit types. It very probably is
+ // larger than that, since there exist scalar types larger than 64bit. Here,
+ // we test the smallest alignment we know.
+ // Should we ever start using std::max_align_t, this would be one place to
+ // fix up.
+ if (!is_aligned(this, sizeof(uint64_t))) {
+ print_block_on_error(tty, (address)this);
+ fatal(PREFIX "Block at " PTR_FORMAT ": block address is unaligned", p2i(this));
+ }
+
+ // Check header canary
+ if (_canary != _header_canary_life_mark) {
+ print_block_on_error(tty, (address)this);
+ fatal(PREFIX "Block at " PTR_FORMAT ": header canary broken.", p2i(this));
+ }
+
+#ifndef _LP64
+ // On 32-bit we have a second canary, check that one too.
+ if (_alt_canary != _header_alt_canary_life_mark) {
+ print_block_on_error(tty, (address)this);
+ fatal(PREFIX "Block at " PTR_FORMAT ": header alternate canary broken.", p2i(this));
}
- return ret;
+#endif
+
+ // Does the block size seem reasonable?
+ if (_size >= max_reasonable_malloc_size) {
+ print_block_on_error(tty, (address)this);
+ fatal(PREFIX "Block at " PTR_FORMAT ": header looks invalid (weirdly large block size)", p2i(this));
+ }
+
+ // Check footer canary
+ if (get_footer() != _footer_canary_life_mark) {
+ print_block_on_error(tty, footer_address());
+ fatal(PREFIX "Block at " PTR_FORMAT ": footer canary broken at " PTR_FORMAT " (buffer overflow?)",
+ p2i(this), p2i(footer_address()));
+ }
+#undef PREFIX
+}
+
+bool MallocHeader::record_malloc_site(const NativeCallStack& stack, size_t size,
+ size_t* bucket_idx, size_t* pos_idx, MEMFLAGS flags) const {
+ return MallocSiteTable::allocation_at(stack, size, bucket_idx, pos_idx, flags);
}
bool MallocHeader::get_stack(NativeCallStack& stack) const {
@@ -142,18 +239,6 @@
return true;
}
-bool MallocTracker::transition(NMT_TrackingLevel from, NMT_TrackingLevel to) {
- assert(from != NMT_off, "Can not transition from off state");
- assert(to != NMT_off, "Can not transition to off state");
- assert (from != NMT_minimal, "cannot transition from minimal state");
-
- if (from == NMT_detail) {
- assert(to == NMT_minimal || to == NMT_summary, "Just check");
- MallocSiteTable::shutdown();
- }
- return true;
-}
-
// Record a malloc memory allocation
void* MallocTracker::record_malloc(void* malloc_base, size_t size, MEMFLAGS flags,
const NativeCallStack& stack, NMT_TrackingLevel level) {
@@ -175,7 +260,7 @@
assert(((size_t)memblock & (sizeof(size_t) * 2 - 1)) == 0, "Alignment check");
#ifdef ASSERT
- if (level > NMT_minimal) {
+ if (level > NMT_off) {
// Read back
assert(get_size(memblock) == size, "Wrong size");
assert(get_flags(memblock) == flags, "Wrong flags");
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/mallocTracker.hpp openjdk-17-17.0.7+7/src/hotspot/share/services/mallocTracker.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/mallocTracker.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/mallocTracker.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -239,35 +239,98 @@
/*
* Malloc tracking header.
- * To satisfy malloc alignment requirement, NMT uses 2 machine words for tracking purpose,
- * which ensures 8-bytes alignment on 32-bit systems and 16-bytes on 64-bit systems (Product build).
+ *
+ * If NMT is active (state >= minimal), we need to track allocations. A simple and cheap way to
+ * do this is by using malloc headers.
+ *
+ * The user allocation is preceded by a header and is immediately followed by a (possibly unaligned)
+ * footer canary:
+ *
+ * +--------------+------------- .... ------------------+-----+
+ * | header | user | can |
+ * | | allocation | ary |
+ * +--------------+------------- .... ------------------+-----+
+ * 16 bytes user size 2 byte
+ *
+ * Alignment:
+ *
+ * The start of the user allocation needs to adhere to malloc alignment. We assume 128 bits
+ * on both 64-bit/32-bit to be enough for that. So the malloc header is 16 bytes long on both
+ * 32-bit and 64-bit.
+ *
+ * Layout on 64-bit:
+ *
+ * 0 1 2 3 4 5 6 7
+ * +--------+--------+--------+--------+--------+--------+--------+--------+
+ * | 64-bit size | ...
+ * +--------+--------+--------+--------+--------+--------+--------+--------+
+ *
+ * 8 9 10 11 12 13 14 15 16 ++
+ * +--------+--------+--------+--------+--------+--------+--------+--------+ ------------------------
+ * ... | bucket idx | pos idx | flags | unused | canary | ... User payload ....
+ * +--------+--------+--------+--------+--------+--------+--------+--------+ ------------------------
+ *
+ * Layout on 32-bit:
+ *
+ * 0 1 2 3 4 5 6 7
+ * +--------+--------+--------+--------+--------+--------+--------+--------+
+ * | alt. canary | 32-bit size | ...
+ * +--------+--------+--------+--------+--------+--------+--------+--------+
+ *
+ * 8 9 10 11 12 13 14 15 16 ++
+ * +--------+--------+--------+--------+--------+--------+--------+--------+ ------------------------
+ * ... | bucket idx | pos idx | flags | unused | canary | ... User payload ....
+ * +--------+--------+--------+--------+--------+--------+--------+--------+ ------------------------
+ *
+ * Notes:
+ * - We have a canary in the two bytes directly preceding the user payload. That allows us to
+ * catch negative buffer overflows.
+ * - On 32-bit, due to the smaller size_t, we have some bits to spare. So we also have a second
+ * canary at the very start of the malloc header (generously sized 32 bits).
+ * - The footer canary consists of two bytes. Since the footer location may be unaligned to 16 bits,
+ * the bytes are stored individually.
*/
class MallocHeader {
-#ifdef _LP64
- size_t _size : 64;
- size_t _flags : 8;
- size_t _pos_idx : 16;
- size_t _bucket_idx: 40;
-#define MAX_MALLOCSITE_TABLE_SIZE right_n_bits(40)
-#define MAX_BUCKET_LENGTH right_n_bits(16)
-#else
- size_t _size : 32;
- size_t _flags : 8;
- size_t _pos_idx : 8;
- size_t _bucket_idx: 16;
-#define MAX_MALLOCSITE_TABLE_SIZE right_n_bits(16)
-#define MAX_BUCKET_LENGTH right_n_bits(8)
-#endif // _LP64
+
+ NOT_LP64(uint32_t _alt_canary);
+ size_t _size;
+ uint16_t _bucket_idx;
+ uint16_t _pos_idx;
+ uint8_t _flags;
+ uint8_t _unused;
+ uint16_t _canary;
+
+#define MAX_MALLOCSITE_TABLE_SIZE (USHRT_MAX - 1)
+#define MAX_BUCKET_LENGTH (USHRT_MAX - 1)
+
+ static const uint16_t _header_canary_life_mark = 0xE99E;
+ static const uint16_t _header_canary_dead_mark = 0xD99D;
+ static const uint16_t _footer_canary_life_mark = 0xE88E;
+ static const uint16_t _footer_canary_dead_mark = 0xD88D;
+ NOT_LP64(static const uint32_t _header_alt_canary_life_mark = 0xE99EE99E;)
+ NOT_LP64(static const uint32_t _header_alt_canary_dead_mark = 0xD88DD88D;)
+
+ // We discount sizes larger than these
+ static const size_t max_reasonable_malloc_size = LP64_ONLY(256 * G) NOT_LP64(3500 * M);
+
+ // Check block integrity. If block is broken, print out a report
+ // to tty (optionally with hex dump surrounding the broken block),
+ // then trigger a fatal error.
+ void check_block_integrity() const;
+ void print_block_on_error(outputStream* st, address bad_address) const;
+ void mark_block_as_dead();
+
+ static uint16_t build_footer(uint8_t b1, uint8_t b2) { return ((uint16_t)b1 << 8) | (uint16_t)b2; }
+
+ uint8_t* footer_address() const { return ((address)this) + sizeof(MallocHeader) + _size; }
+ uint16_t get_footer() const { return build_footer(footer_address()[0], footer_address()[1]); }
+ void set_footer(uint16_t v) { footer_address()[0] = v >> 8; footer_address()[1] = (uint8_t)v; }
public:
- MallocHeader(size_t size, MEMFLAGS flags, const NativeCallStack& stack, NMT_TrackingLevel level) {
- assert(sizeof(MallocHeader) == sizeof(void*) * 2,
- "Wrong header size");
- if (level == NMT_minimal) {
- return;
- }
+ MallocHeader(size_t size, MEMFLAGS flags, const NativeCallStack& stack, NMT_TrackingLevel level) {
+ assert(size < max_reasonable_malloc_size, "Too large allocation size?");
_flags = NMTUtil::flag_to_index(flags);
set_size(size);
@@ -277,11 +340,18 @@
if (record_malloc_site(stack, size, &bucket_idx, &pos_idx, flags)) {
assert(bucket_idx <= MAX_MALLOCSITE_TABLE_SIZE, "Overflow bucket index");
assert(pos_idx <= MAX_BUCKET_LENGTH, "Overflow bucket position index");
- _bucket_idx = bucket_idx;
- _pos_idx = pos_idx;
+ _bucket_idx = (uint16_t)bucket_idx;
+ _pos_idx = (uint16_t)pos_idx;
}
}
+ _unused = 0;
+ _canary = _header_canary_life_mark;
+ // On 32-bit we have some bits more, use them for a second canary
+ // guarding the start of the header.
+ NOT_LP64(_alt_canary = _header_alt_canary_life_mark;)
+ set_footer(_footer_canary_life_mark); // set after initializing _size
+
MallocMemorySummary::record_malloc(size, flags);
MallocMemorySummary::record_new_malloc_header(sizeof(MallocHeader));
}
@@ -290,8 +360,8 @@
inline MEMFLAGS flags() const { return (MEMFLAGS)_flags; }
bool get_stack(NativeCallStack& stack) const;
- // Cleanup tracking information before the memory is released.
- void release() const;
+ // Cleanup tracking information and mark block as dead before the memory is released.
+ void release();
private:
inline void set_size(size_t size) {
@@ -301,6 +371,9 @@
size_t* bucket_idx, size_t* pos_idx, MEMFLAGS flags) const;
};
+// This needs to be true on both 64-bit and 32-bit platforms
+STATIC_ASSERT(sizeof(MallocHeader) == (sizeof(uint64_t) * 2));
+
// Main class called from MemTracker to track malloc activities
class MallocTracker : AllStatic {
@@ -308,13 +381,16 @@
// Initialize malloc tracker for specific tracking level
static bool initialize(NMT_TrackingLevel level);
- static bool transition(NMT_TrackingLevel from, NMT_TrackingLevel to);
-
// malloc tracking header size for specific tracking level
static inline size_t malloc_header_size(NMT_TrackingLevel level) {
return (level == NMT_off) ? 0 : sizeof(MallocHeader);
}
+ // malloc tracking footer size for specific tracking level
+ static inline size_t malloc_footer_size(NMT_TrackingLevel level) {
+ return (level == NMT_off) ? 0 : sizeof(uint16_t);
+ }
+
// Parameter name convention:
// memblock : the beginning address for user data
// malloc_base: the beginning address that includes malloc tracking header
@@ -349,11 +425,6 @@
return header->flags();
}
- // Get header size
- static inline size_t get_header_size(void* memblock) {
- return (memblock == NULL) ? 0 : sizeof(MallocHeader);
- }
-
static inline void record_new_arena(MEMFLAGS flags) {
MallocMemorySummary::record_new_arena(flags);
}
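
Because the footer canary sits at whatever address the user payload happens to end on, it may be misaligned for a uint16_t; the header code above therefore assembles and reads it one byte at a time (build_footer/set_footer/get_footer). A standalone round-trip of that byte-wise encoding:

    #include <cstdint>
    #include <cstdio>

    static uint16_t build_footer(uint8_t b1, uint8_t b2) {
        return (uint16_t)((b1 << 8) | b2);
    }
    // Write and read byte-wise, never through a (possibly misaligned) uint16_t*.
    static void set_footer(uint8_t* p, uint16_t v) {
        p[0] = (uint8_t)(v >> 8); p[1] = (uint8_t)v;
    }
    static uint16_t get_footer(const uint8_t* p) { return build_footer(p[0], p[1]); }

    int main() {
        uint8_t buf[3];
        set_footer(buf + 1, 0xE88E);   // deliberately odd address
        std::printf("%s\n", get_footer(buf + 1) == 0xE88E ? "canary intact" : "overflow!");
    }
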
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/memTracker.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/memTracker.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/memTracker.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/memTracker.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -108,41 +108,9 @@
}
}
-
-// Shutdown can only be issued via JCmd, and NMT JCmd is serialized by lock
-void MemTracker::shutdown() {
- // We can only shutdown NMT to minimal tracking level if it is ever on.
- if (tracking_level() > NMT_minimal) {
- transition_to(NMT_minimal);
- }
-}
-
-bool MemTracker::transition_to(NMT_TrackingLevel level) {
- NMT_TrackingLevel current_level = tracking_level();
-
- assert(level != NMT_off || current_level == NMT_off, "Cannot transition NMT to off");
-
- if (current_level == level) {
- return true;
- } else if (current_level > level) {
- // Downgrade tracking level, we want to lower the tracking level first
- _tracking_level = level;
- // Make _tracking_level visible immediately.
- OrderAccess::fence();
- VirtualMemoryTracker::transition(current_level, level);
- MallocTracker::transition(current_level, level);
- ThreadStackTracker::transition(current_level, level);
- } else {
- // Upgrading tracking level is not supported and has never been supported.
- // Allocating and deallocating malloc tracking structures is not thread safe and
- // leads to inconsistencies unless a lot coarser locks are added.
- }
- return true;
-}
-
// Report during error reporting.
void MemTracker::error_report(outputStream* output) {
- if (tracking_level() >= NMT_summary) {
+ if (enabled()) {
report(true, output, MemReporterBase::default_scale); // just print summary for error case.
output->print("Preinit state:");
NMTPreInit::print_state(output);
@@ -157,11 +125,8 @@
// printing the final report during normal VM exit, it should not print
// the final report again. In addition, it should be guarded from
// recursive calls in case NMT reporting itself crashes.
- if (Atomic::cmpxchg(&g_final_report_did_run, false, true) == false) {
- NMT_TrackingLevel level = tracking_level();
- if (level >= NMT_summary) {
- report(level == NMT_summary, output, 1);
- }
+ if (enabled() && Atomic::cmpxchg(&g_final_report_did_run, false, true) == false) {
+ report(tracking_level() == NMT_summary, output, 1);
}
}
@@ -189,7 +154,6 @@
out->print_cr("State: %s", NMTUtil::tracking_level_to_string(_tracking_level));
out->print_cr("Malloc allocation site table size: %d", MallocSiteTable::hash_buckets());
out->print_cr(" Tracking stack depth: %d", NMT_TrackingStackDepth);
- NOT_PRODUCT(out->print_cr("Peak concurrent access: %d", MallocSiteTable::access_peak_count());)
out->cr();
MallocSiteTable::print_tuning_statistics(out);
out->cr();
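
With the shutdown path removed from memTracker.cpp, the final report only has to guard against running twice and against recursive entry if reporting itself crashes, which the patch does with a compare-and-swap on g_final_report_did_run. The same once-only idiom in portable C++, with std::atomic standing in for HotSpot's Atomic::cmpxchg:

    // Once-only execution via compare-exchange.
    #include <atomic>
    #include <cstdio>

    static std::atomic<bool> g_report_did_run{false};

    void final_report() {
      bool expected = false;
      // Only the first caller flips false to true; later or recursive
      // callers see true and skip the report.
      if (g_report_did_run.compare_exchange_strong(expected, true)) {
        std::puts("...final report runs exactly once...");
      }
    }
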
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/memTracker.hpp openjdk-17-17.0.7+7/src/hotspot/share/services/memTracker.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/memTracker.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/memTracker.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -49,7 +49,7 @@
class MemTracker : AllStatic {
public:
static inline NMT_TrackingLevel tracking_level() { return NMT_off; }
- static inline void shutdown() { }
+ static inline bool enabled() { return false; }
static inline void init() { }
static bool check_launcher_nmt_support(const char* value) { return true; }
static bool verify_nmt_option() { return true; }
@@ -58,6 +58,7 @@
const NativeCallStack& stack, NMT_TrackingLevel level) { return mem_base; }
static inline size_t malloc_header_size(NMT_TrackingLevel level) { return 0; }
static inline size_t malloc_header_size(void* memblock) { return 0; }
+ static inline size_t malloc_footer_size(NMT_TrackingLevel level) { return 0; }
static inline void* malloc_base(void* memblock) { return memblock; }
static inline void* record_free(void* memblock, NMT_TrackingLevel level) { return memblock; }
@@ -136,14 +137,9 @@
return _tracking_level;
}
- // Shutdown native memory tracking.
- // This transitions the tracking level:
- // summary -> minimal
- // detail -> minimal
- static void shutdown();
-
- // Transition the tracking level to specified level
- static bool transition_to(NMT_TrackingLevel level);
+ static inline bool enabled() {
+ return _tracking_level > NMT_off;
+ }
static inline void* record_malloc(void* mem_base, size_t size, MEMFLAGS flag,
const NativeCallStack& stack, NMT_TrackingLevel level) {
@@ -157,11 +153,9 @@
return MallocTracker::malloc_header_size(level);
}
- static size_t malloc_header_size(void* memblock) {
- if (tracking_level() != NMT_off) {
- return MallocTracker::get_header_size(memblock);
- }
- return 0;
+ // malloc tracking footer size for specific tracking level
+ static inline size_t malloc_footer_size(NMT_TrackingLevel level) {
+ return MallocTracker::malloc_footer_size(level);
}
// To malloc base address, which is the starting address
@@ -181,20 +175,20 @@
// Record creation of an arena
static inline void record_new_arena(MEMFLAGS flag) {
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
MallocTracker::record_new_arena(flag);
}
// Record destruction of an arena
static inline void record_arena_free(MEMFLAGS flag) {
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
MallocTracker::record_arena_free(flag);
}
// Record arena size change. Arena size is the size of all arena
// chuncks that backing up the arena.
static inline void record_arena_size_change(ssize_t diff, MEMFLAGS flag) {
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
MallocTracker::record_arena_size_change(diff, flag);
}
@@ -204,11 +198,9 @@
static inline void record_virtual_memory_reserve(void* addr, size_t size, const NativeCallStack& stack,
MEMFLAGS flag = mtNone) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadCritical tc;
- // Recheck to avoid potential racing during NMT shutdown
- if (tracking_level() < NMT_summary) return;
VirtualMemoryTracker::add_reserved_region((address)addr, size, stack, flag);
}
}
@@ -216,10 +208,9 @@
static inline void record_virtual_memory_reserve_and_commit(void* addr, size_t size,
const NativeCallStack& stack, MEMFLAGS flag = mtNone) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadCritical tc;
- if (tracking_level() < NMT_summary) return;
VirtualMemoryTracker::add_reserved_region((address)addr, size, stack, flag);
VirtualMemoryTracker::add_committed_region((address)addr, size, stack);
}
@@ -228,10 +219,9 @@
static inline void record_virtual_memory_commit(void* addr, size_t size,
const NativeCallStack& stack) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadCritical tc;
- if (tracking_level() < NMT_summary) return;
VirtualMemoryTracker::add_committed_region((address)addr, size, stack);
}
}
@@ -244,28 +234,25 @@
// memory flags of the original region.
static inline void record_virtual_memory_split_reserved(void* addr, size_t size, size_t split) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadCritical tc;
- // Recheck to avoid potential racing during NMT shutdown
- if (tracking_level() < NMT_summary) return;
VirtualMemoryTracker::split_reserved_region((address)addr, size, split);
}
}
static inline void record_virtual_memory_type(void* addr, MEMFLAGS flag) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadCritical tc;
- if (tracking_level() < NMT_summary) return;
VirtualMemoryTracker::set_reserved_region_type((address)addr, flag);
}
}
static void record_thread_stack(void* addr, size_t size) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadStackTracker::new_thread_stack((address)addr, size, CALLER_PC);
}
@@ -273,7 +260,7 @@
static inline void release_thread_stack(void* addr, size_t size) {
assert_post_init();
- if (tracking_level() < NMT_summary) return;
+ if (!enabled()) return;
if (addr != NULL) {
ThreadStackTracker::delete_thread_stack((address)addr, size);
}
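
The header changes above replace every scattered tracking_level() < NMT_summary comparison with a single enabled() predicate, and drop the re-checks under ThreadCritical that were only needed while a concurrent shutdown could lower the level. A sketch of the resulting fast-path pattern, with illustrative stand-ins rather than the HotSpot declarations:

    #include <cstddef>

    enum TrackingLevel { kUnknown, kOff, kSummary, kDetail };

    static TrackingLevel g_level = kSummary;  // fixed at VM initialization

    inline bool enabled() { return g_level > kOff; }

    void record_virtual_memory_reserve(void* addr, size_t size) {
      if (!enabled()) return;   // one cheap check on every tracking call
      if (addr != nullptr) {
        // ... lock and update tracker state. No second level check is
        // needed: the level can no longer be lowered at runtime.
      }
    }
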
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/nmtCommon.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/nmtCommon.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/nmtCommon.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/nmtCommon.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2013, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -28,11 +28,14 @@
#define MEMORY_TYPE_DECLARE_NAME(type, human_readable) \
human_readable,
+STATIC_ASSERT(NMT_off > NMT_unknown);
+STATIC_ASSERT(NMT_summary > NMT_off);
+STATIC_ASSERT(NMT_detail > NMT_summary);
+
const char* NMTUtil::_memory_type_names[] = {
MEMORY_TYPES_DO(MEMORY_TYPE_DECLARE_NAME)
};
-
const char* NMTUtil::scale_name(size_t scale) {
switch(scale) {
case 1: return "";
@@ -64,7 +67,6 @@
switch(lvl) {
case NMT_unknown: return "unknown"; break;
case NMT_off: return "off"; break;
- case NMT_minimal: return "minimal"; break;
case NMT_summary: return "summary"; break;
case NMT_detail: return "detail"; break;
default: return "invalid"; break;
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/nmtCommon.hpp openjdk-17-17.0.7+7/src/hotspot/share/services/nmtCommon.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/nmtCommon.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/nmtCommon.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -41,10 +41,6 @@
// - nothing is tracked
// - no malloc headers are used
//
-// "minimal": after shutdown - NMT had been on at some point but has been switched off
-// - nothing is tracked
-// - malloc headers are allocated but not initialized not used
-//
// "summary": after initialization with NativeMemoryTracking=summary - NMT in summary mode
// - category summaries per tag are tracked
// - thread stacks are tracked
@@ -59,25 +55,16 @@
// - malloc headers are used
// - malloc call site table is allocated and used
//
-// Valid state transitions:
-//
-// unknown ----> off
-// |
-// |--> summary --
-// | |
-// |--> detail --+--> minimal
-//
// Please keep relation of numerical values!
-// unknown < off < minimal < summary < detail
+// unknown < off < summary < detail
//
enum NMT_TrackingLevel {
- NMT_unknown = 0,
- NMT_off = 1,
- NMT_minimal = 2,
- NMT_summary = 3,
- NMT_detail = 4
+ NMT_unknown,
+ NMT_off,
+ NMT_summary,
+ NMT_detail
};
// Number of stack frames to capture. This is a
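
Since predicates like enabled() rely on the numeric ordering of the levels, the enum values are now left implicit and the required relation unknown < off < summary < detail is enforced at compile time by the STATIC_ASSERTs added to nmtCommon.cpp above. The same guard in portable C++:

    // Enforcing an enum's numeric ordering at compile time.
    enum NMT_TrackingLevel { NMT_unknown, NMT_off, NMT_summary, NMT_detail };

    static_assert(NMT_off > NMT_unknown, "enabled() relies on this ordering");
    static_assert(NMT_summary > NMT_off, "enabled() relies on this ordering");
    static_assert(NMT_detail > NMT_summary, "enabled() relies on this ordering");
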
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/nmtDCmd.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/nmtDCmd.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/nmtDCmd.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/nmtDCmd.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -50,8 +50,7 @@
"comparison against previous baseline, which shows the memory " \
"allocation activities at different callsites.",
"BOOLEAN", false, "false"),
- _shutdown("shutdown", "request runtime to shutdown itself and free the " \
- "memory used by runtime.",
+ _shutdown("shutdown", "deprecated.",
"BOOLEAN", false, "false"),
_statistics("statistics", "print tracker statistics for tuning purpose.", \
"BOOLEAN", false, "false"),
@@ -79,9 +78,6 @@
if (MemTracker::tracking_level() == NMT_off) {
output()->print_cr("Native memory tracking is not enabled");
return;
- } else if (MemTracker::tracking_level() == NMT_minimal) {
- output()->print_cr("Native memory tracking has been shutdown");
- return;
}
const char* scale_value = _scale.value();
@@ -148,8 +144,7 @@
output()->print_cr("No detail baseline for comparison");
}
} else if (_shutdown.value()) {
- MemTracker::shutdown();
- output()->print_cr("Native memory tracking has been turned off");
+ output()->print_cr("This option is deprecated and will be ignored.");
} else if (_statistics.value()) {
if (check_detail_tracking_level(output())) {
MemTracker::tuning_statistics(output());
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/threadStackTracker.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/threadStackTracker.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/threadStackTracker.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/threadStackTracker.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, Red Hat, Inc. All rights reserved.
+ * Copyright (c) 2019, 2021, Red Hat, Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -41,19 +41,6 @@
}
return true;
}
-
-bool ThreadStackTracker::transition(NMT_TrackingLevel from, NMT_TrackingLevel to) {
- assert (from != NMT_minimal, "cannot convert from the lowest tracking level to anything");
- if (to == NMT_minimal) {
- assert(from == NMT_summary || from == NMT_detail, "Just check");
- ThreadCritical tc;
- if (_simple_thread_stacks != NULL) {
- delete _simple_thread_stacks;
- _simple_thread_stacks = NULL;
- }
- }
- return true;
-}
int ThreadStackTracker::compare_thread_stack_base(const SimpleThreadStackSite& s1, const SimpleThreadStackSite& s2) {
return s1.base() - s2.base();
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/threadStackTracker.hpp openjdk-17-17.0.7+7/src/hotspot/share/services/threadStackTracker.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/threadStackTracker.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/threadStackTracker.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, Red Hat, Inc. All rights reserved.
+ * Copyright (c) 2019, 2021, Red Hat, Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -72,7 +72,6 @@
static SortedLinkedList<SimpleThreadStackSite, compare_thread_stack_base>* _simple_thread_stacks;
public:
static bool initialize(NMT_TrackingLevel level);
- static bool transition(NMT_TrackingLevel from, NMT_TrackingLevel to);
static void new_thread_stack(void* base, size_t size, const NativeCallStack& stack);
static void delete_thread_stack(void* base, size_t size);
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/virtualMemoryTracker.cpp openjdk-17-17.0.7+7/src/hotspot/share/services/virtualMemoryTracker.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/virtualMemoryTracker.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/virtualMemoryTracker.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -671,20 +671,3 @@
}
return true;
}
-
-// Transition virtual memory tracking level.
-bool VirtualMemoryTracker::transition(NMT_TrackingLevel from, NMT_TrackingLevel to) {
- assert (from != NMT_minimal, "cannot convert from the lowest tracking level to anything");
- if (to == NMT_minimal) {
- assert(from == NMT_summary || from == NMT_detail, "Just check");
- // Clean up virtual memory tracking data structures.
- ThreadCritical tc;
- // Check for potential race with other thread calling transition
- if (_reserved_regions != NULL) {
- delete _reserved_regions;
- _reserved_regions = NULL;
- }
- }
-
- return true;
-}
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/services/virtualMemoryTracker.hpp openjdk-17-17.0.7+7/src/hotspot/share/services/virtualMemoryTracker.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/services/virtualMemoryTracker.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/services/virtualMemoryTracker.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2013, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -387,8 +387,6 @@
// Walk virtual memory data structure for creating baseline, etc.
static bool walk_virtual_memory(VirtualMemoryWalker* walker);
- static bool transition(NMT_TrackingLevel from, NMT_TrackingLevel to);
-
// Snapshot current thread stacks
static void snapshot_thread_stacks();
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/utilities/debug.cpp openjdk-17-17.0.7+7/src/hotspot/share/utilities/debug.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/utilities/debug.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/utilities/debug.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -237,7 +237,6 @@
static void print_error_for_unit_test(const char* message, const char* detail_fmt, va_list detail_args) {
-#ifdef ASSERT
if (ExecutingUnitTests) {
char detail_msg[256];
if (detail_fmt != NULL) {
@@ -262,7 +261,6 @@
va_end(detail_args_copy);
}
}
-#endif // ASSERT
}
void report_vm_error(const char* file, int line, const char* error_msg, const char* detail_fmt, ...)
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/utilities/globalDefinitions.cpp openjdk-17-17.0.7+7/src/hotspot/share/utilities/globalDefinitions.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/utilities/globalDefinitions.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/utilities/globalDefinitions.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -228,6 +228,17 @@
"*narrowklass*",
"*conflict*"
};
+const char* type2name(BasicType t) {
+ if (t < ARRAY_SIZE(type2name_tab)) {
+ return type2name_tab[t];
+ } else if (t == T_ILLEGAL) {
+ return "*illegal*";
+ } else {
+ fatal("invalid type %d", t);
+ return "invalid type";
+ }
+}
+
BasicType name2type(const char* name) {
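
type2name() moves from an inline, unchecked table lookup that returned NULL for out-of-range values to an out-of-line function that handles T_ILLEGAL explicitly and calls fatal() for anything else, failing fast instead of handing callers a NULL name. A sketch of the same range-checked lookup, with illustrative types and names:

    #include <cstdio>
    #include <cstdlib>

    enum Type { T_BOOLEAN, T_CHAR, T_FLOAT, T_ILLEGAL = 99 };
    static const char* names[] = { "boolean", "char", "float" };

    const char* type_name(Type t) {
      if ((unsigned)t < sizeof(names) / sizeof(names[0])) {
        return names[t];
      } else if (t == T_ILLEGAL) {
        return "*illegal*";
      }
      std::fprintf(stderr, "invalid type %d\n", (int)t);
      std::abort();   // analogous to fatal(): never hand out a NULL name
    }
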
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/utilities/globalDefinitions.hpp openjdk-17-17.0.7+7/src/hotspot/share/utilities/globalDefinitions.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/utilities/globalDefinitions.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/utilities/globalDefinitions.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -35,6 +35,7 @@
#include COMPILER_HEADER(utilities/globalDefinitions)
#include
+#include <type_traits>
class oopDesc;
@@ -718,10 +719,11 @@
extern char type2char_tab[T_CONFLICT+1]; // Map a BasicType to a jchar
inline char type2char(BasicType t) { return (uint)t < T_CONFLICT+1 ? type2char_tab[t] : 0; }
extern int type2size[T_CONFLICT+1]; // Map BasicType to result stack elements
-extern const char* type2name_tab[T_CONFLICT+1]; // Map a BasicType to a jchar
-inline const char* type2name(BasicType t) { return (uint)t < T_CONFLICT+1 ? type2name_tab[t] : NULL; }
+extern const char* type2name_tab[T_CONFLICT+1]; // Map a BasicType to a char*
extern BasicType name2type(const char* name);
+const char* type2name(BasicType t);
+
inline jlong max_signed_integer(BasicType bt) {
if (bt == T_INT) {
return max_jint;
@@ -1210,4 +1212,8 @@
}
+// Converts any type T to a reference type.
+template <typename T>
+std::add_rvalue_reference_t<T> declval() noexcept;
+
#endif // SHARE_UTILITIES_GLOBALDEFINITIONS_HPP
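
The new declval() mirrors std::declval from <utility>: it is declared but never defined, so it may only appear in unevaluated contexts such as decltype or sizeof, where it produces a value of type T without requiring T to be constructible. Presumably HotSpot carries its own copy to avoid the header dependency. For example:

    #include <type_traits>

    template <typename T>
    std::add_rvalue_reference_t<T> declval() noexcept;  // no definition needed

    struct NotDefaultConstructible {
      NotDefaultConstructible(int);
      long f() const;
    };

    // decltype never evaluates its operand, so declval() is never called:
    using R = decltype(declval<NotDefaultConstructible>().f());
    static_assert(std::is_same<R, long>::value, "R is long");
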
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/utilities/globalDefinitions_gcc.hpp openjdk-17-17.0.7+7/src/hotspot/share/utilities/globalDefinitions_gcc.hpp
--- openjdk-17-17.0.6+10/src/hotspot/share/utilities/globalDefinitions_gcc.hpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/utilities/globalDefinitions_gcc.hpp 2023-04-12 20:11:58.000000000 +0000
@@ -144,10 +144,21 @@
#endif // _LP64
// gcc warns about applying offsetof() to non-POD object or calculating
-// offset directly when base address is NULL. Use 16 to get around the
-// warning. The -Wno-invalid-offsetof option could be used to suppress
-// this warning, but we instead just avoid the use of offsetof().
-#define offset_of(klass,field) (size_t)((intx)&(((klass*)16)->field) - 16)
+// offset directly when base address is NULL. The -Wno-invalid-offsetof
+// option could be used to suppress this warning, but we instead just
+// avoid the use of offsetof().
+//
+// FIXME: This macro is complex and rather arcane. Perhaps we should
+// use offsetof() instead, with the invalid-offsetof warning
+// temporarily disabled.
+#define offset_of(klass,field) \
+[]() { \
+ char space[sizeof (klass)] ATTRIBUTE_ALIGNED(16); \
+ klass* dummyObj = (klass*)space; \
+ char* c = (char*)(void*)&dummyObj->field; \
+ return (size_t)(c - space); \
+}()
+
#if defined(_LP64) && defined(__APPLE__)
#define JLONG_FORMAT "%ld"
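
The rewritten offset_of computes the field offset from real, suitably aligned stack storage instead of dereferencing a fake object at address 16, which avoids gcc's invalid-offsetof warning without the magic-address trick. The immediately-invoked-lambda idiom in isolation; alignas(16) stands in for HotSpot's ATTRIBUTE_ALIGNED(16):

    #include <cstddef>
    #include <cstdio>

    struct Example { char a; long b; };

    #define my_offset_of(klass, field)             \
      []() {                                       \
        alignas(16) char space[sizeof(klass)];     \
        klass* dummy = (klass*)space;              \
        char* c = (char*)(void*)&dummy->field;     \
        return (size_t)(c - space);                \
      }()

    int main() {
      // For standard-layout types this agrees with plain offsetof():
      std::printf("%zu == %zu\n", my_offset_of(Example, b), offsetof(Example, b));
      return 0;
    }
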
diff -Nru openjdk-17-17.0.6+10/src/hotspot/share/utilities/vmError.cpp openjdk-17-17.0.7+7/src/hotspot/share/utilities/vmError.cpp
--- openjdk-17-17.0.6+10/src/hotspot/share/utilities/vmError.cpp 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/hotspot/share/utilities/vmError.cpp 2023-04-12 20:11:58.000000000 +0000
@@ -1032,6 +1032,15 @@
st->cr();
}
+#ifndef _WIN32
+ STEP("printing locale settings")
+
+ if (_verbose) {
+ os::Posix::print_active_locale(st);
+ st->cr();
+ }
+#endif
+
STEP("printing signal handlers")
if (_verbose) {
@@ -1213,6 +1222,12 @@
os::print_environment_variables(st, env_list);
st->cr();
+ // STEP("printing locale settings")
+#ifndef _WIN32
+ os::Posix::print_active_locale(st);
+ st->cr();
+#endif
+
// STEP("printing signal handlers")
os::print_signal_handlers(st, buf, sizeof(buf));
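
hs_err reports and the print_vm_info output now include the active locale on POSIX platforms. The body of os::Posix::print_active_locale is not shown in this diff, but querying the current setting per category presumably boils down to calling setlocale with a NULL locale argument, which reads rather than changes the setting:

    #include <clocale>
    #include <cstdio>

    int main() {
      // A NULL locale argument makes setlocale return the current setting.
      std::printf("LC_ALL=%s\n",     std::setlocale(LC_ALL, nullptr));
      std::printf("LC_CTYPE=%s\n",   std::setlocale(LC_CTYPE, nullptr));
      std::printf("LC_NUMERIC=%s\n", std::setlocale(LC_NUMERIC, nullptr));
      return 0;
    }
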
diff -Nru openjdk-17-17.0.6+10/src/java.base/linux/native/libjava/CgroupMetrics.c openjdk-17-17.0.7+7/src/java.base/linux/native/libjava/CgroupMetrics.c
--- openjdk-17-17.0.6+10/src/java.base/linux/native/libjava/CgroupMetrics.c 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/linux/native/libjava/CgroupMetrics.c 2023-04-12 20:11:58.000000000 +0000
@@ -39,5 +39,7 @@
Java_jdk_internal_platform_CgroupMetrics_getTotalMemorySize0
(JNIEnv *env, jclass ignored)
{
- return sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGESIZE);
+ jlong pages = sysconf(_SC_PHYS_PAGES);
+ jlong page_size = sysconf(_SC_PAGESIZE);
+ return pages * page_size;
}
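
sysconf() returns long, which is 32-bit on ILP32 platforms, so the old single-expression multiply could wrap before its result was widened to jlong: with 4 KB pages, any machine with 2 GB of RAM or more overflows a signed 32-bit product. Storing each factor in a jlong first forces the multiplication itself to be 64-bit. The same point in a standalone sketch with illustrative values:

    #include <cstdint>
    #include <cstdio>

    int main() {
      long pages = 1500000;   // e.g. sysconf(_SC_PHYS_PAGES) on a ~6 GB machine
      long page_size = 4096;  // e.g. sysconf(_SC_PAGESIZE)

      // Where long is 32-bit, pages * page_size wraps before this cast:
      int64_t bad = (int64_t)(pages * page_size);

      // Widening one operand first makes the multiplication 64-bit:
      int64_t good = (int64_t)pages * page_size;

      std::printf("%lld\n", (long long)good);  // 6144000000
      (void)bad;
      return 0;
    }
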
diff -Nru openjdk-17-17.0.6+10/src/java.base/share/classes/com/sun/crypto/provider/CipherCore.java openjdk-17-17.0.7+7/src/java.base/share/classes/com/sun/crypto/provider/CipherCore.java
--- openjdk-17-17.0.6+10/src/java.base/share/classes/com/sun/crypto/provider/CipherCore.java 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/share/classes/com/sun/crypto/provider/CipherCore.java 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2002, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2002, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -813,10 +813,13 @@
if (outputCapacity < estOutSize) {
cipher.save();
}
- // create temporary output buffer if the estimated size is larger
- // than the user-provided buffer.
- internalOutput = new byte[estOutSize];
- offset = 0;
+ if (outputCapacity < estOutSize || padding != null) {
+ // create temporary output buffer if the estimated size is larger
+ // than the user-provided buffer or a padding needs to be removed
+ // before copying the unpadded result to the output buffer
+ internalOutput = new byte[estOutSize];
+ offset = 0;
+ }
}
byte[] outBuffer = (internalOutput != null) ? internalOutput : output;
diff -Nru openjdk-17-17.0.6+10/src/java.base/share/classes/java/lang/ProcessBuilder.java openjdk-17-17.0.7+7/src/java.base/share/classes/java/lang/ProcessBuilder.java
--- openjdk-17-17.0.6+10/src/java.base/share/classes/java/lang/ProcessBuilder.java 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/share/classes/java/lang/ProcessBuilder.java 2023-04-12 20:11:58.000000000 +0000
@@ -1100,8 +1100,8 @@
String dir = directory == null ? null : directory.toString();
- for (int i = 1; i < cmdarray.length; i++) {
- if (cmdarray[i].indexOf('\u0000') >= 0) {
+ for (String s : cmdarray) {
+ if (s.indexOf('\u0000') >= 0) {
throw new IOException("invalid null character in command");
}
}
diff -Nru openjdk-17-17.0.6+10/src/java.base/share/classes/java/lang/ProcessHandleImpl.java openjdk-17-17.0.7+7/src/java.base/share/classes/java/lang/ProcessHandleImpl.java
--- openjdk-17-17.0.6+10/src/java.base/share/classes/java/lang/ProcessHandleImpl.java 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/share/classes/java/lang/ProcessHandleImpl.java 2023-04-12 20:11:58.000000000 +0000
@@ -144,33 +144,40 @@
processReaperExecutor.execute(new Runnable() {
// Use inner class to avoid lambda stack overhead
public void run() {
- int exitValue = waitForProcessExit0(pid, shouldReap);
- if (exitValue == NOT_A_CHILD) {
- // pid not alive or not a child of this process
- // If it is alive wait for it to terminate
- long sleep = 300; // initial milliseconds to sleep
- int incr = 30; // increment to the sleep time
+ String threadName = Thread.currentThread().getName();
+ Thread.currentThread().setName("process reaper (pid " + pid + ")");
+ try {
+ int exitValue = waitForProcessExit0(pid, shouldReap);
+ if (exitValue == NOT_A_CHILD) {
+ // pid not alive or not a child of this process
+ // If it is alive wait for it to terminate
+ long sleep = 300; // initial milliseconds to sleep
+ int incr = 30; // increment to the sleep time
- long startTime = isAlive0(pid);
- long origStart = startTime;
- while (startTime >= 0) {
- try {
- Thread.sleep(Math.min(sleep, 5000L)); // no more than 5 sec
- sleep += incr;
- } catch (InterruptedException ie) {
- // ignore and retry
- }
- startTime = isAlive0(pid); // recheck if it is alive
- if (startTime > 0 && origStart > 0 && startTime != origStart) {
- // start time changed (and is not zero), pid is not the same process
- break;
+ long startTime = isAlive0(pid);
+ long origStart = startTime;
+ while (startTime >= 0) {
+ try {
+ Thread.sleep(Math.min(sleep, 5000L)); // no more than 5 sec
+ sleep += incr;
+ } catch (InterruptedException ie) {
+ // ignore and retry
+ }
+ startTime = isAlive0(pid); // recheck if it is alive
+ if (startTime > 0 && origStart > 0 && startTime != origStart) {
+ // start time changed (and is not zero), pid is not the same process
+ break;
+ }
}
+ exitValue = 0;
}
- exitValue = 0;
+ newCompletion.complete(exitValue);
+ // remove from cache afterwards
+ completions.remove(pid, newCompletion);
+ } finally {
+ // Restore thread name
+ Thread.currentThread().setName(threadName);
}
- newCompletion.complete(exitValue);
- // remove from cache afterwards
- completions.remove(pid, newCompletion);
}
});
}
diff -Nru openjdk-17-17.0.6+10/src/java.base/share/classes/java/net/InetAddress.java openjdk-17-17.0.7+7/src/java.base/share/classes/java/net/InetAddress.java
--- openjdk-17-17.0.6+10/src/java.base/share/classes/java/net/InetAddress.java 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/share/classes/java/net/InetAddress.java 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1995, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1995, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -930,6 +930,7 @@
public InetAddress[] lookupAllHostAddr(String host)
throws UnknownHostException
{
+ validate(host);
return impl.lookupAllHostAddr(host);
}
@@ -1314,6 +1315,7 @@
return ret;
}
+ validate(host);
boolean ipv6Expected = false;
if (host.charAt(0) == '[') {
// This is supposed to be an IPv6 literal
@@ -1321,44 +1323,45 @@
host = host.substring(1, host.length() -1);
ipv6Expected = true;
} else {
- // This was supposed to be a IPv6 address, but it's not!
- throw new UnknownHostException(host + ": invalid IPv6 address");
+ // This was supposed to be a IPv6 literal, but it's not
+ throw invalidIPv6LiteralException(host, false);
}
}
- // if host is an IP address, we won't do further lookup
+ // Check and try to parse host string as an IP address literal
if (IPAddressUtil.digit(host.charAt(0), 16) != -1
|| (host.charAt(0) == ':')) {
- byte[] addr;
+ byte[] addr = null;
int numericZone = -1;
String ifname = null;
- // see if it is IPv4 address
- try {
- addr = IPAddressUtil.validateNumericFormatV4(host);
- } catch (IllegalArgumentException iae) {
- var uhe = new UnknownHostException(host);
- uhe.initCause(iae);
- throw uhe;
+
+ if (!ipv6Expected) {
+ // check if it is IPv4 address only if host is not wrapped in '[]'
+ try {
+ addr = IPAddressUtil.validateNumericFormatV4(host);
+ } catch (IllegalArgumentException iae) {
+ var uhe = new UnknownHostException(host);
+ uhe.initCause(iae);
+ throw uhe;
+ }
}
if (addr == null) {
- // This is supposed to be an IPv6 literal
- // Check if a numeric or string zone id is present
+ // Try to parse host string as an IPv6 literal
+ // Check if a numeric or string zone id is present first
int pos;
- if ((pos=host.indexOf ('%')) != -1) {
- numericZone = checkNumericZone (host);
+ if ((pos = host.indexOf('%')) != -1) {
+ numericZone = checkNumericZone(host);
if (numericZone == -1) { /* remainder of string must be an ifname */
- ifname = host.substring (pos+1);
+ ifname = host.substring(pos + 1);
}
}
- if ((addr = IPAddressUtil.textToNumericFormatV6(host)) == null && host.contains(":")) {
- throw new UnknownHostException(host + ": invalid IPv6 address");
+ if ((addr = IPAddressUtil.textToNumericFormatV6(host)) == null &&
+ (host.contains(":") || ipv6Expected)) {
+ throw invalidIPv6LiteralException(host, ipv6Expected);
}
- } else if (ipv6Expected) {
- // Means an IPv4 literal between brackets!
- throw new UnknownHostException("["+host+"]");
}
- InetAddress[] ret = new InetAddress[1];
if(addr != null) {
+ InetAddress[] ret = new InetAddress[1];
if (addr.length == Inet4Address.INADDRSZ) {
if (numericZone != -1 || ifname != null) {
// IPv4-mapped address must not contain zone-id
@@ -1375,12 +1378,18 @@
return ret;
}
} else if (ipv6Expected) {
- // We were expecting an IPv6 Literal, but got something else
- throw new UnknownHostException("["+host+"]");
+ // We were expecting an IPv6 Literal since host string starts
+ // and ends with square brackets, but we got something else.
+ throw invalidIPv6LiteralException(host, true);
}
return getAllByName0(host, reqAddr, true, true);
}
+ private static UnknownHostException invalidIPv6LiteralException(String host, boolean wrapInBrackets) {
+ String hostString = wrapInBrackets ? "[" + host + "]" : host;
+ return new UnknownHostException(hostString + ": invalid IPv6 address literal");
+ }
+
/**
* Returns the loopback address.
*
@@ -1802,6 +1811,12 @@
pf.put("family", holder().getFamily());
s.writeFields();
}
+
+ private static void validate(String host) throws UnknownHostException {
+ if (host.indexOf(0) != -1) {
+ throw new UnknownHostException("NUL character not allowed in hostname");
+ }
+ }
}
/*
diff -Nru openjdk-17-17.0.6+10/src/java.base/share/classes/java/net/URI.java openjdk-17-17.0.7+7/src/java.base/share/classes/java/net/URI.java
--- openjdk-17-17.0.6+10/src/java.base/share/classes/java/net/URI.java 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/share/classes/java/net/URI.java 2023-04-12 20:11:58.000000000 +0000
@@ -2135,10 +2135,12 @@
path = base.substring(0, i + 1);
} else {
StringBuilder sb = new StringBuilder(base.length() + cn);
- // 5.2 (6a)
- if (i >= 0)
+ // 5.2 (6a-b)
+ if (i >= 0 || !absolute) {
sb.append(base, 0, i + 1);
- // 5.2 (6b)
+ } else {
+ sb.append('/');
+ }
sb.append(child);
path = sb.toString();
}
diff -Nru openjdk-17-17.0.6+10/src/java.base/share/classes/java/net/URLConnection.java openjdk-17-17.0.7+7/src/java.base/share/classes/java/net/URLConnection.java
--- openjdk-17-17.0.6+10/src/java.base/share/classes/java/net/URLConnection.java 2023-01-10 13:21:55.000000000 +0000
+++ openjdk-17-17.0.7+7/src/java.base/share/classes/java/net/URLConnection.java 2023-04-12 20:11:58.000000000 +0000
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1995, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1995, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -611,10 +611,12 @@
* missing or malformed.
*/
public int getHeaderFieldInt(String name, int Default) {
- String value = getHeaderField(name);
- try {
- return Integer.parseInt(value);
- } catch (Exception e) { }
+ final String value = getHeaderField(name);
+ if (value != null) {
+ try {
+ return Integer.parseInt(value);
+ } catch (NumberFormatException e) { }
+ }
return Default;
}
@@ -634,10 +636,12 @@
* @since 1.7
*/
public long getHeaderFieldLong(String name, long Default) {
- String value = getHeaderField(name);
- try {
- return Long.parseLong(value);
- } catch (Exception e) { }
+ final String value = getHeaderField(name);
+ if (value != null) {
+