PHPFixing
Showing posts with label jvm. Show all posts

Thursday, November 3, 2022

[FIXED] How are Lambda Expressions Translated Into Java Byte Code

 November 03, 2022     compiler-theory, java, java-8, jvm, lambda     No comments   

Issue

I am trying to create an example using lambda expressions in Java, on the official JDK 8. The example runs successfully, but when I tried to check how the compiler translates the lambda expression into bytecode, I got confused. Here is the code of my example:

public class LambdaTest {
    public Integer lambdaBinaryOpertor(BinaryOperator<Integer> binaryOperator) {
        return binaryOperator.apply(60, 72);
    }

    public static void main(String[] args) {
        LambdaTest test = new LambdaTest();
        BinaryOperator<Integer> binaryOperator = (a, b) -> a*b;
        System.out.println("Additon using Lambda BinaryOperator: "+test.lambdaBinaryOpertor(binaryOperator));
    }
}

In this article, the authors discuss how the compiler translates lambda expressions into bytecode. According to that document, a lambda expression is converted into a static method, and the place where the lambda expression is declared holds a reference to that static method. The following example is from the article:

//Source code
class A {
    public void foo() {
        List<String> list = ...
        list.forEach( s -> { System.out.println(s); } );
    }
} 

//After compile above code "translate code " 
class A {
    public void foo() {
        List<String> list = ...
        list.forEach( [lambda for lambda$1 as Block] );
    }

    static void lambda$1(String s) {
        System.out.println(s);
    }
}

My example runs fine and gives the appropriate result. But when I run the javap command to check the bytecode of the class, there is no static method for the lambda:

c:\>javap LambdaTest
Compiled from "LambdaTest.java"
public class LambdaTest {
public LambdaTest();
public java.lang.Integer lambdaBinaryOpertor(java.util.function.BinaryOperator <java.lang.Integer>);
public static void main(java.lang.String[]);
}

In the case of generics, a bridge method is created by the compiler, and we can also see that method using the javap command; but in the case of lambdas there is no static method. The article was published in 2012, and Java 8 was released in March 2014. So I have some questions regarding the translation of lambdas:

  1. Is there a new implementation for lambdas in JDK 8, introduced after this article was released, or am I doing something wrong when checking for the lambda method?
  2. How does the compiler actually deal with lambda expressions?
  3. How does the JVM invoke a lambda expression?

Solution

Use additional javap arguments to print the full information about the class: javap -v -p -s -c. The -p flag matters here, because the method javac generates for the lambda body is private (and synthetic), so plain javap omits it. The call site itself compiles to an invokedynamic instruction that links the lambda on first execution via java.lang.invoke.LambdaMetafactory, which is also why no anonymous-class file is produced.

For your example, the compiled lambda will be:

private static java.lang.Integer lambda$main$0(java.lang.Integer, java.lang.Integer);
    descriptor: (Ljava/lang/Integer;Ljava/lang/Integer;)Ljava/lang/Integer;
    flags: ACC_PRIVATE, ACC_STATIC, ACC_SYNTHETIC
    Code:
      stack=2, locals=2, args_size=2
         0: aload_0       
         1: invokevirtual #17                 // Method java/lang/Integer.intValue:()I
         4: aload_1       
         5: invokevirtual #17                 // Method java/lang/Integer.intValue:()I
         8: imul          
         9: invokestatic  #2                  // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
        12: areturn       
      LineNumberTable:
        line 10: 0
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0      13     0     a   Ljava/lang/Integer;
            0      13     1     b   Ljava/lang/Integer;
}
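As a quick runtime sanity check, you can also list the desugared method via reflection. This is just a sketch: the class name is made up, and the lambda$… naming is a javac implementation detail rather than a specification guarantee.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.function.BinaryOperator;

public class LambdaMethodCheck {
    // javac desugars this lambda body into a private static synthetic
    // method of this class (named lambda$...), reached at the use site
    // through an invokedynamic instruction.
    static final BinaryOperator<Integer> MULTIPLY = (a, b) -> a * b;

    static boolean hasSyntheticLambdaMethod() {
        return Arrays.stream(LambdaMethodCheck.class.getDeclaredMethods())
                     .anyMatch(m -> m.isSynthetic()
                                    && m.getName().startsWith("lambda$"));
    }

    public static void main(String[] args) {
        System.out.println(MULTIPLY.apply(60, 72));      // 4320
        System.out.println(hasSyntheticLambdaMethod());  // true on javac
    }
}
```

Running javap -p -c LambdaMethodCheck on the compiled class shows the same thing statically: the private static lambda$… method next to the invokedynamic call site.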


Answered By - Ιναη ßαbαηιη
Answer Checked By - David Goodson (PHPFixing Volunteer)

Monday, October 31, 2022

[FIXED] Why is this JOML (JVM) code so much faster than the equivalent GSL (C)?

 October 31, 2022     benchmarking, c, java, jvm, performance     No comments   

Issue

I am attempting to optimize a small library for doing arithmetic on vectors.

To roughly check my progress, I decided to benchmark the performance of two popular vector arithmetic libraries written in two different languages, the GNU Scientific Library (GSL, C), and the Java OpenGL Math Library (JOML, JVM). I expected GSL, as a large project written in C and compiled ahead of time, to be significantly faster than JOML, with extra baggage from object management, method calls, and conforming to the Java specifications.

Surprisingly, JOML (JVM) instead ended up being around 4X faster than GSL (C). I want to understand why this is the case.

The benchmark I performed was to compute 4,000,000 iterations of Leibniz's formula for Pi, in chunks of 4 at a time via 4-dimensional vectors. The exact algorithm doesn't matter and doesn't have to make sense; it was just the first and simplest thing I thought of that would let me use multiple vector operations per iteration.

This is the C code in question:

#include <stdio.h>
#include <time.h>
#include <gsl/gsl_vector.h>
#include <unistd.h>
#include <math.h>
#include <string.h>

#define IT 1000000

double pibench_inplace(int it) {
    gsl_vector* d = gsl_vector_calloc(4);
    gsl_vector* w = gsl_vector_calloc(4);
    for (int i=0; i<4; i++) {
        gsl_vector_set(d, i, (double)i*2+1);
        gsl_vector_set(w, i, (i%2==0) ? 1 : -1);
    }
    gsl_vector* b = gsl_vector_calloc(4);
    double pi = 0.0;
    for (int i=0; i<it; i++) {
        gsl_vector_memcpy(b, d);
        gsl_vector_add_constant(b, (double)i*8);
        for (int i=0; i<4; i++) {
            gsl_vector_set(b, i, pow(gsl_vector_get(b, i), -1.));
        }
        gsl_vector_mul(b, w);
        pi += gsl_vector_sum(b);
    }
    return pi*4;
}

double pibench_fast(int it) {
    double pi = 0;
    int eq_it = it * 4;
    for (int i=0; i<eq_it; i++) {
        pi += (1 / ((double)i * 2 + 1) * ((i%2==0) ? 1 : -1));
    }
    return pi*4;
}

int main(int argc, char* argv[]) {
    if (argc < 2) {
        printf("Please specify a run mode.\n");
        return 1;
    }
    double pi;
    struct timespec start = {0,0}, end={0,0};
    clock_gettime(CLOCK_MONOTONIC, &start);
    if (strcmp(argv[1], "inplace") == 0) {
        pi = pibench_inplace(IT);
    } else if (strcmp(argv[1], "fast") == 0) {
        pi = pibench_fast(IT);
    } else {
        sleep(1);
        printf("Please specify a valid run mode.\n");
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("Pi: %f\n", pi);
    printf("Time: %f\n", ((double)end.tv_sec + 1.0e-9*end.tv_nsec) - ((double)start.tv_sec + 1.0e-9*start.tv_nsec));
    return 0;
}

This is how I built and ran the C code:

$ gcc GSL_pi.c -O3 -march=native -static $(gsl-config --cflags --libs) -o GSL_pi && ./GSL_pi inplace

Pi: 3.141592
Time: 0.061561

This is the JVM-platform code in question (written in Kotlin):

package joml_pi

import org.joml.Vector4d
import kotlin.time.measureTimedValue
import kotlin.time.DurationUnit


fun pibench(count: Int=1000000): Double {
    val d = Vector4d(1.0, 3.0, 5.0, 7.0)
    val w = Vector4d(1.0, -1.0, 1.0, -1.0)
    val c = Vector4d(1.0, 1.0, 1.0, 1.0)
    val scratchpad = Vector4d()
    var pi = 0.0
    for (i in 0..count) {
        scratchpad.set(i*8.0)
        scratchpad.add(d)
        c.div(scratchpad, scratchpad)
        scratchpad.mul(w)
        pi += scratchpad.x + scratchpad.y + scratchpad.z + scratchpad.w
    }
    return pi * 4.0
}

@kotlin.time.ExperimentalTime
fun <T> benchmark(func: () -> T, name: String="", count: Int=5) {
    val times = mutableListOf<Double>()
    val results = mutableListOf<T>()
    for (i in 0..count) {
        val result = measureTimedValue<T>( { func() } )
        results.add(result.value)
        times.add(result.duration.toDouble(DurationUnit.SECONDS))
    }
    println(listOf<String>(
            "",
            name,
            "Results:",
            results.joinToString(", "),
            "Times:",
            times.joinToString(", ")
    ).joinToString("\n"))
}

@kotlin.time.ExperimentalTime
fun main(args: Array<String>) {
    benchmark<Double>(::pibench, "pibench")
}

This is how I built and ran the JVM-platform code:

$ kotlinc -classpath joml-1.10.5.jar JOML_pi.kt && kotlin -classpath joml-1.10.5.jar:. joml_pi/JOML_piKt.class

pibench
Results:
3.1415924035900464, 3.1415924035900464, 3.1415924035900464, 3.1415924035900464, 3.1415924035900464, 3.1415924035900464
Times:
0.026850784, 0.014998012, 0.013095291, 0.012805373, 0.012977388, 0.012948186

There are multiple possibilities I have considered for why this operation runs several times faster on the JVM than the equivalent C code. I do not find any of them particularly compelling:

  • I'm doing different iteration counts by an order of magnitude in the two languages. — It's possible I'm grossly misreading the code, but I'm pretty sure this isn't the case.
  • I've fudged up the algorithm and am doing vastly different things in each case. — Again maybe I've misread it, but I don't think that's happening, and both cases do produce numerically correct results.
  • The timing mechanism I use for C introduces a lot of overhead. — I also tested simpler and no-op functions. They completed and were measured as expected in much less time.
  • The JVM code is parallelized across multiple processor cores — With many more iterations, I watched my CPU use over a longer period and it did not exceed one core.
  • The JVM code makes better use of SIMD/vectorization. — I compiled the C with -O3 and -march=native, statically linking against libraries from Debian packages. In another case I even tried the -floop/-ftree parallelization flags. Either way performance didn't really change.
  • GSL has extra features that add overhead in this particular test. — I also have another version, with the vector class implemented and used through Cython, that does only the basics (iterating over a pointer), and performs roughly equivalently to GSL (with slightly more overhead, as expected). So that seems to be the limit for native code.
  • JOML is actually using native code. — The README says it makes no JNI calls, I'm importing directly from a multi-platform .jar file that I've checked and contains only .class files, and the JNI adds ~20 Java ops of overhead to every call so even if it had magical native code that shouldn't help anyway at such a granular level.
  • The JVM has different specifics for floating point arithmetic. — The JOML class I used accepts and returns "doubles" just as the C code. In any case, having to emulate a specification that deviates from hardware capabilities probably shouldn't improve performance like this.
  • The exponential reciprocal step in my GSL code is less efficient than the division reciprocal step in my JOML code. — While commenting that out does reduce total execution time by around 25% (to ~0.045s), that still leaves a massive 3X gap with the JVM code (~0.015s).

The only remaining explanation I can think of is that most of the time spent in C is overhead from function calls. That would be consistent with the fact that the C and Cython implementations perform similarly. The performance advantage of the Java/Kotlin/JVM implementation would then come from its JIT optimizing away the function calls by effectively inlining everything in the loop. However, given that JIT compilers are generally reputed to be at best only slightly faster than native code, and only under favourable conditions, that still seems like a massive speedup to get just from having a JIT.

I suppose if that is the case, then a follow-up question would be whether I could realistically or reliably expect these performance characteristics to carry over outside of a synthetic toy benchmark, in applications that may have much more scattered numeric calls rather than a single million-iteration loop.


Solution

First, a disclaimer: I am the author of JOML.

Now, you are probably not comparing apples to apples here. GSL is a general-purpose linear algebra library supporting many different linear algebra algorithms and data structures.

JOML, on the other hand, is not a general-purpose linear algebra library but a special-purpose library covering only the use-cases of computer graphics, so it contains only concrete classes for 2-, 3- and 4-dimensional vectors and for 2x2, 3x3 and 4x4 (and non-square) matrices. In other words, even if you wanted to allocate a 5-dimensional vector, you couldn't with JOML.

Therefore, all the algorithms and data structures in JOML are explicitly written against classes with x, y, z and w fields, without any loops. So a 4-dimensional vector add is literally just:

dest.x = this.x + v.x;
dest.y = this.y + v.y;
dest.z = this.z + v.z;
dest.w = this.w + v.w;

And there isn't even any SIMD involved in that, because as of now there is no JVM JIT that can auto-vectorize over different fields of a class. Thus, a vector add (or multiply, or any lane-wise operation) right now will produce exactly these scalar operations.
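To see why this field-based style is so JIT-friendly, here is a minimal sketch of the Kotlin benchmark above written against a hypothetical JOML-style class (Vec4 and its method names are made up, not JOML's actual API): once the JIT inlines these tiny methods into the loop, each iteration is just a handful of scalar double operations with no call overhead.

```java
// Hypothetical JOML-style vector: plain double fields, no loops, no arrays.
final class Vec4 {
    double x, y, z, w;

    Vec4(double x, double y, double z, double w) {
        this.x = x; this.y = y; this.z = z; this.w = w;
    }

    // Each lane-wise op is four scalar operations; the JIT inlines them.
    Vec4 set(double s)     { x = s; y = s; z = s; w = s; return this; }
    Vec4 add(Vec4 v)       { x += v.x; y += v.y; z += v.z; w += v.w; return this; }
    Vec4 divInto(Vec4 num) { x = num.x / x; y = num.y / y; z = num.z / z; w = num.w / w; return this; }
    Vec4 mul(Vec4 v)       { x *= v.x; y *= v.y; z *= v.z; w *= v.w; return this; }
    double sum()           { return x + y + z + w; }
}

public class FieldVecPi {
    // Same Leibniz-in-chunks-of-4 scheme as the Kotlin benchmark.
    static double pibench(int count) {
        Vec4 d = new Vec4(1.0, 3.0, 5.0, 7.0);
        Vec4 w = new Vec4(1.0, -1.0, 1.0, -1.0);
        Vec4 one = new Vec4(1.0, 1.0, 1.0, 1.0);
        Vec4 scratch = new Vec4(0.0, 0.0, 0.0, 0.0);
        double pi = 0.0;
        for (int i = 0; i < count; i++) {
            // scratch = w / (8i + d), accumulated into the running sum
            scratch.set(i * 8.0).add(d).divInto(one).mul(w);
            pi += scratch.sum();
        }
        return pi * 4.0;
    }

    public static void main(String[] args) {
        System.out.println(pibench(1_000_000));  // approximately 3.141592...
    }
}
```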

Next, you say:

JOML is actually using native code. — The README says it makes no JNI calls, I'm importing directly from a multi-platform .jar file that I've checked and contains only .class files, and the JNI adds ~20 Java ops of overhead to every call so even if it had magical native code that shouldn't help anyway at such a granular level.

JOML itself does not define and use native code via the JNI interface. Of course, the operators and JRE methods that JOML uses internally will get intrinsified to native code, but not via the JNI interface. Rather, all methods (such as Math.fma()) will get intrinsified directly into their machine code equivalents at JIT compilation time.

Now, as pointed out by others in the comments to your question: you are using a linked library (as opposed to a header-only library like GLM, which would probably be a better fit for your C/C++ code). So a C/C++ compiler probably won't be able to "see through" your call site into the callee and apply optimizations there based on the static information it has at the call site (like you calling gsl_vector_calloc with the argument 4). Every runtime check/branch on the argument that GSL needs to do will still have to happen at runtime. This is quite different from using a header-only library (like GLM), where any half-decent C/C++ compiler will optimize all of that away based on static knowledge of your calls/code. And I would assume that, yes, an equivalent C/C++ program would then beat a Java/Scala/Kotlin/JVM program in speed.

So, your comparison of GSL and JOML is somewhat like comparing the performance of Microsoft Excel evaluating a cell containing = 1 + 2 against C code that effectively executes printf("%f\n", 1.0 + 2.0);. The former (Microsoft Excel, here standing in for GSL) is much more general and versatile, while the latter (JOML) is highly specialized.

It just so happens that the specialization fits to your exact use-case right now, making it even possible to use JOML for that.



Answered By - httpdigest
Answer Checked By - Pedro (PHPFixing Volunteer)

Tuesday, October 18, 2022

[FIXED] why the MaxHeapSize values are different between java -XX:+PrintFlagsFinal and jinfo -flag MaxHeapSize

 October 18, 2022     docker, heap-memory, java, jvm     No comments   

Issue

I'm running my Java application on an Alpine Linux system in a Docker container, and I want to find out the value of MaxHeapSize, so I used several commands: java -XX:+PrintFlagsFinal, jinfo -flag MaxHeapSize, and jmap -heap. The output confused me: the outputs of jinfo -flag MaxHeapSize and jmap -heap are consistent, but the output of java -XX:+PrintFlagsFinal is different. Why does this happen?

The default container memory Limit setting is 4096MiB.


The output of the java commands is shown below.

bash-5.0# jps -v
9 jar -Dfile.encoding=utf-8 -XX:+UseG1GC -XX:+UseStringDeduplication -XX:-OmitStackTraceInFastThrow -XX:MaxRAMPercentage=60.0 -XX:InitialRAMPercentage=20.0 -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintCommandLineFlags -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Xloggc:log/gc-%t.log -Duser.timezone=Asia/Shanghai -Delastic.apm.service_name=SUPER-STUDENTS -Delastic.apm.environment=k8s-prod-th-zhidao-manhattan -Delastic.apm.server_urls= -Delastic.apm.trace_methods= -Delastic.apm.trace_methods_duration_threshold=100ms -Delastic.apm.application_packages=outfox -Delastic.apm.capture_body=all -Delastic.apm.ignore_message_queues=* -Delastic.apm.profiling_inferred_spans_enabled=true -Delastic.apm.profiling_inferred_spans_sampling_interval=10ms -Delastic.apm.profiling_inferred_spans_min_duration=50ms -Dskywalking.agent.service_name=super-students -Dskywalking.agent.instance_name=super-students-75f964dbbd-5gfnv -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1234
64155 Jps -Dapplication.home=/opt/java/openjdk -Xms8m
bash-5.0# java -XX:+PrintFlagsFinal -version | grep -Ei "maxheapsize|maxram"
    uintx DefaultMaxRAMFraction                     = 4                                   {product}
    uintx MaxHeapSize                              := 1073741824                          {product}
 uint64_t MaxRAM                                    = 137438953472                        {pd product}
    uintx MaxRAMFraction                            = 4                                   {product}
   double MaxRAMPercentage                          = 25.000000                           {product}
openjdk version "1.8.0_282"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_282-b08)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.282-b08, mixed mode)
bash-5.0# jinfo -flag MaxHeapSize 9
-XX:MaxHeapSize=2577399808
bash-5.0# jmap -heap 9
Attaching to process ID 9, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.282-b08

using thread-local object allocation.
Garbage-First (G1) GC with 1 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 2577399808 (2458.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 1545601024 (1474.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 1048576 (1.0MB)

Heap Usage:
G1 Heap:
   regions  = 2458
   capacity = 2577399808 (2458.0MB)
   used     = 320120112 (305.2903289794922MB)
   free     = 2257279696 (2152.709671020508MB)
   12.420273758319455% used
G1 Young Generation:
Eden Space:
   regions  = 53
   capacity = 654311424 (624.0MB)
   used     = 55574528 (53.0MB)
   free     = 598736896 (571.0MB)
   8.493589743589743% used
Survivor Space:
   regions  = 10
   capacity = 10485760 (10.0MB)
   used     = 10485760 (10.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 247
   capacity = 389021696 (371.0MB)
   used     = 254059824 (242.2903289794922MB)
   free     = 134961872 (128.7096710205078MB)
   65.30736630174992% used

63962 interned Strings occupying 6772928 bytes.



Solution

These are not comparing the same thing.

When running jinfo or jmap, these attach to the existing process with PID 9, as listed in the first jps command.

When running java -XX:+PrintFlagsFinal -version, this creates a new JVM process and prints the information for that new process. Note that the original PID 9 process was started with additional flags that affect the calculated heap size, in particular -XX:MaxRAMPercentage=60.0: with the 4096 MiB container limit, 60% is about 2458 MiB = 2577399808 bytes, matching jinfo and jmap, while the freshly launched JVM uses the default MaxRAMPercentage of 25%, giving 1024 MiB = 1073741824 bytes.

For a more accurate comparison, you could add the -XX:+PrintFlagsFinal flag to the main command run when the container starts. I would expect this to match the values returned by jinfo and jmap.
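Another cross-check that avoids launching a second JVM entirely is to query the limit from inside the running process. This is only a sketch, and note that Runtime.maxMemory() reports approximately the resolved MaxHeapSize (it can differ slightly depending on the collector):

```java
public class HeapProbe {
    public static void main(String[] args) {
        // Queried inside the process, so this reflects the flags this JVM
        // was actually started with (the same view jinfo/jmap attach to),
        // not the defaults of a freshly launched `java -version` process.
        long max = Runtime.getRuntime().maxMemory();
        System.out.printf("max heap: %d bytes (%.1f MiB)%n",
                max, max / (1024.0 * 1024.0));
    }
}
```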



Answered By - Tim Moore
Answer Checked By - David Marino (PHPFixing Volunteer)

Monday, September 12, 2022

[FIXED] How to distribute Java Application

 September 12, 2022     cross-platform, distribution, jar, java, jvm     No comments   

Issue

I would like to know about the various options for distributing a Java application.

I know that you can

  • Distribute the Source Code and let users compile it themselves, or provide make files, etc..
  • Package it into a JAR, and have self extracting archives
  • and (I'm sure, myriad other ways)

I'm hoping for some explanations of the most common options (and ones I haven't thought of) and, in particular, whether they require the user to have a JVM, or whether one can be bundled. Personally I'm not too fond of an installer that halts due to a lack of a JVM. And who says an app needs an installer? Stand-alone solutions are fine too.

Also, worth asking is how to handle cross-platform distributing, exe's vs dmg's, etc...

My primary motivation for this question (which I appreciate is similar to others) is to find solutions that don't require the user to already have a JVM installed - but for completeness, I'm asking generally.

Thanks very much


Solution

Distribute the Source Code and let users compile it themselves, or provide make files, etc..

This is probably OK for open-source projects, but very unusual for anything commercial. I'd recommend providing it as an option for the techies, but distributing JARs as well.

Package it into a JAR

I'd call this the best practice

and have self extracting archives

How about making the jar executable instead?

I'm hoping for some explanations about the most common options (and one's I haven't thought of) and in particular, do they require a user to have a JVM, or can it be bundled with one - personally I'm not too fond of an installer which halts due to a lack of JVM.

I don't think it's legal to bundle JREs. That said, it's rather obvious that a Java-based solution won't work without Java; OpenOffice and many others fail to install without an installed JRE. I'd say that's understandable and OK.

IzPack seems to be a good solution to create Java-based installers.

My primary motivation for this question (which I appreciate is similar to others) is to find solutions that don't require the user to already have a JVM installed

As I wrote, I think it's not legal to bundle the JRE [UPDATE: it is legal, read this document for reference] (and it's also not a great option, as you'd have to bundle many different OS/architecture combinations). So the only other way would be native compilation (can't help you with that, sorry).



Answered By - Sean Patrick Floyd
Answer Checked By - Marie Seifert (PHPFixing Admin)

Tuesday, July 19, 2022

[FIXED] How does JVM makes raw type wrapper "Immutable" when passing function parameter?

 July 19, 2022     immutability, integer, java, jvm, wrapper     No comments   

Issue

I have a simple code snippet to test the "immutability" of Integer:

    public static void changeInteger(Integer i) {
        ++i;
    }
    public static void main(String[] args) {
        Integer i = new Integer(3);
        ++i;
        System.out.println(i); // 4
        ImmutableWrapper.changeInteger(i);
        System.out.println(i); // still 4!
    }

From this I can see that, as the Java language is designed, a wrapper class like Integer behaves immutably when passed as a function parameter.

What confuses me is: why does ++i inside main() increase the value, while the same ++i inside the called function doesn't change the caller's variable?

I wonder how the Java compiler or the JVM achieves this. I know that Integer is a reference type, so when it is passed as a parameter a reference is passed; shouldn't ++i then change its wrapped value?

I'd appreciate a technical explanation.


Solution

Autoboxing and unboxing are the automatic conversions the compiler performs between the primitive types and the wrapper classes.
https://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.15.1 says:
If necessary, the sum is narrowed by a narrowing primitive conversion (§5.1.3) and/or subjected to boxing conversion (§5.1.7) to the type of the variable before it is stored. The value of the prefix increment expression is the value of the variable after the new value is stored.
So ++i is equivalent to i = Integer.valueOf(i.intValue() + 1). It does not mutate the Integer object at all; it rebinds the variable to a newly boxed value. Inside changeInteger, only the local parameter is rebound, so the caller's variable still points to the old object.
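The rebinding is easy to observe by keeping a second reference to the original object (a small sketch; the class name is made up):

```java
public class BoxedIncrement {
    public static void main(String[] args) {
        Integer i = Integer.valueOf(3);
        Integer before = i;     // second reference to the same object
        ++i;                    // i = Integer.valueOf(i.intValue() + 1)
        System.out.println(i);            // 4
        System.out.println(before);       // 3: the old object is untouched
        System.out.println(before == i);  // false: i points elsewhere now
    }
}
```

This is exactly why changeInteger has no visible effect: the ++i inside it rebinds only the method's own copy of the reference.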



Answered By - shanfeng
Answer Checked By - Terry (PHPFixing Volunteer)

Sunday, July 10, 2022

[FIXED] Where is the super reference in a Java instance method's stack frame?

 July 10, 2022     java, jvm, reference, stack, super     No comments   

Issue

I read Bill Venners's excellent book Inside the Java Virtual Machine, which in Chapter 5 explores in detail, among other things, the composition of a JVM stack frame. (This chapter of the book also happens to be officially published here: https://www.artima.com/insidejvm/ed2/jvm8.html) Apart from this book, I have studied the runtime data areas of several JVMs fairly closely, especially their stacks and heaps.

In an instance method's stack frame, the local variables section constitutes an array of words which holds the method arguments (or parameters), local variables and the "hidden" this reference.

What I'd like to know is where the super reference is stored, as that is also always available in any non-static context (i.e. an instance method body or initializer block), except in the Object class. Is it stored somewhere alongside the this reference? If so, why is it seemingly always left out of stack-frame representations/overviews?


Solution

There is no "super" reference.

When you do:

super.foo()

You "seem" to be calling foo on an object called super, but that's merely Java's syntax and doesn't have to reflect what's happening under the hood. When this call is translated, it becomes an invokespecial instruction that invokes the superclass's foo method.

Compare this to a this.foo() call, which translates to an invokevirtual instruction. Unlike invokespecial, invokevirtual does dynamic dispatch, selecting the right method to call depending on the runtime type of this.

Note that in both cases there is an aload_0 instruction before the invocation, loading the same this reference onto the stack: super is not a separate object and occupies no slot of its own in the frame's local variables.
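A small sketch makes the dispatch difference visible (class names are made up; the methods return strings instead of printing just to keep the result easy to check):

```java
class Base {
    String foo() { return "Base.foo"; }
}

class Derived extends Base {
    @Override String foo() { return "Derived.foo"; }

    String callBoth() {
        // this.foo(): invokevirtual -> dynamic dispatch picks Derived.foo
        // super.foo(): invokespecial -> statically bound to Base.foo,
        //              yet it still receives the very same 'this' reference
        return this.foo() + " / " + super.foo();
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        System.out.println(new Derived().callBoth());  // Derived.foo / Base.foo
    }
}
```

Running javap -c Derived shows invokevirtual for the this.foo() call and invokespecial for super.foo(), each preceded by aload_0.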



Answered By - Sweeper
Answer Checked By - Katrina (PHPFixing Volunteer)
Copyright © PHPFixing